
ISO/IEC 42001 — AI Management System

The world's first international standard for AI Management Systems. ISO 42001 certification demonstrates that your organisation governs AI responsibly — with risk-based controls, impact assessments, and transparent accountability structures. Relevant for any organisation that develops, deploys, or procures AI systems.

ISO 42001:2023 · AI Governance · AI Act alignment · IAF MLA recognised

What is ISO/IEC 42001?

ISO/IEC 42001:2023 is the first international standard specifically addressing artificial intelligence management systems (AIMS). Published in December 2023, it provides a framework for organisations to manage the responsible development, provision, and use of AI systems — including requirements for governance, risk management, impact assessment, and continual improvement.

The standard follows the Harmonized Structure (formerly the High-Level Structure) common to ISO management system standards, making it highly compatible with ISO 27001 (information security), ISO 9001 (quality), and other certifications your organisation may already hold.

Who Should Pursue ISO 42001 Certification?

  • AI developers and AI platform providers
  • SaaS companies embedding AI into their products
  • Financial services firms using algorithmic decision-making
  • Healthcare organisations deploying AI diagnostic or clinical tools
  • Public sector bodies and government agencies procuring AI systems
  • Any organisation needing to demonstrate AI Act compliance readiness

ISO 42001 and the EU AI Act

The EU AI Act, which entered into force in August 2024 and applies in phases — with most provisions applying from August 2026 — establishes a risk-based regulatory framework for AI systems in the EU. ISO 42001 certification provides strong alignment with AI Act obligations — particularly for providers of high-risk AI systems, who must demonstrate documented risk management, transparency, and human oversight measures.

BALTUM maps your ISO 42001 AIMS controls directly to AI Act requirements across risk classification, technical documentation, conformity assessment, and post-market monitoring obligations — enabling dual compliance from a single integrated programme.

Key Requirements of ISO 42001

  • Organisational context: Identifying internal and external issues relevant to AI governance; stakeholder requirements and expectations.
  • AI Policy: Top-level commitment document establishing the organisation's approach to responsible AI.
  • AI Risk and Impact Assessment: Structured assessment of AI-specific risks — including bias, opacity, and unintended outcomes — and corresponding treatment plans.
  • AI System Objectives: Documented intended purpose, capabilities, and operational constraints for each AI system in scope.
  • Data Governance: Controls addressing data quality, provenance, and representativeness.
  • Human Oversight: Mechanisms ensuring meaningful human review of AI-generated outputs or decisions.
  • Monitoring and Continual Improvement: Post-deployment performance monitoring, incident tracking, and systematic review cycles.

Integrating ISO 42001 with ISO 27001

Most organisations pursuing ISO 42001 already hold, or are working towards, ISO 27001. The two standards share the same harmonized structure, enabling an efficient integrated implementation with a common management system framework, risk methodology, audit programme, and documentation structure.

BALTUM's integrated ISO 27001 + ISO 42001 programme typically reduces total implementation effort by 35–45% compared to sequential standalone engagements.

BALTUM AI Assessment Tool

Before committing to a full ISO 42001 engagement, use BALTUM's AI-powered readiness assessment tool for an instant, scored gap analysis across the five key control domains. The assessment takes under two minutes and requires no registration.