
ISO 42001 Explained — The New Standard for AI Governance

ISO/IEC 42001:2023 is the world's first international management system standard for artificial intelligence. It provides a structured framework for organisations that develop, provide, or use AI systems to manage risks, ensure responsible practices, and demonstrate trustworthiness. This guide explains what the standard covers, who needs it, and how to pursue certification.

AI & Compliance · ISO 42001 · EU AI Act

The Rise of AI Governance Standards

Artificial intelligence has moved from research labs into the core of business operations. From automated underwriting in insurance to predictive maintenance in manufacturing, from clinical decision support in healthcare to algorithmic trading in finance, AI systems now make or influence decisions that affect millions of people daily. With this adoption comes risk — bias in decision-making, lack of transparency, security vulnerabilities, privacy violations, and unintended consequences that can cause real harm.

Regulatory bodies worldwide have responded. The European Union's AI Act entered into force in August 2024, establishing a risk-based regulatory framework with significant penalties for non-compliance. Similar legislation is advancing in the UK, Canada, Brazil, and across Asia-Pacific. But regulation alone does not tell organisations how to govern AI responsibly. That is where ISO/IEC 42001 comes in.

Published in December 2023, ISO/IEC 42001 is titled "Information technology — Artificial intelligence — Management system for artificial intelligence." It is a certifiable management system standard, meaning organisations can undergo third-party audits and receive formal certification — just as they do with ISO 27001 for information security or ISO 9001 for quality management.

What ISO 42001 Covers

ISO 42001 follows the Harmonised Structure (HS) common to all ISO management system standards. If your organisation already holds ISO 27001 or ISO 9001 certification, the structure will be immediately familiar. The standard contains the following core clauses:

  • Clause 4 — Context of the Organisation: Understanding the internal and external factors that affect the AI management system (AIMS), identifying interested parties and their requirements, defining the scope of the AIMS, and establishing the management system itself.
  • Clause 5 — Leadership: Top management must demonstrate commitment to the AIMS, establish an AI policy that includes principles for responsible AI development and use, and assign roles, responsibilities, and authorities.
  • Clause 6 — Planning: Addresses risk and opportunity assessment specific to AI systems, including the establishment of AI objectives and plans to achieve them. This is where the AI-specific risk assessment methodology is defined.
  • Clause 7 — Support: Covers resources, competence, awareness, communication, and documented information. Notably, competence requirements for AI are specific — the organisation must ensure that personnel working on AI systems have appropriate skills in areas like machine learning, data science, ethics, and domain expertise.
  • Clause 8 — Operation: The operational planning and control of AI systems throughout their lifecycle. This includes AI system impact assessments, the management of data used for AI development, and controls applied during AI system development, deployment, and operation.
  • Clause 9 — Performance Evaluation: Monitoring, measurement, analysis, and evaluation of the AIMS and AI system performance. Includes internal audit requirements and management review.
  • Clause 10 — Improvement: Nonconformity management, corrective action, and continual improvement of the AIMS.

Annex A: AI Controls Reference

Like ISO 27001, ISO 42001 includes an Annex A that provides a set of reference controls. These controls are organised around the specific challenges of AI governance:

  • AI Policies: Establishing policies for responsible AI that address fairness, transparency, accountability, safety, and privacy. These policies must be communicated to all relevant parties.
  • Internal Organisation: Defining roles and responsibilities for AI governance, including oversight functions, AI ethics review boards, and escalation procedures for AI-related concerns.
  • Resources for AI Systems: Managing the compute, data, and tooling resources needed for AI systems. This includes data quality management, data provenance, and infrastructure requirements.
  • AI System Impact Assessment: Conducting assessments of the potential impacts of AI systems on individuals, groups, and society. This must consider both intended and unintended consequences.
  • AI System Lifecycle: Controls covering the full lifecycle: design, data collection and preparation, model building and validation, deployment, operation, monitoring, and decommissioning.
  • Data for AI Systems: Data governance controls including data quality, bias detection in training data, data lineage, consent management, and data protection measures.
  • Third-Party and Customer Relationships: Managing AI-related risks in the supply chain, including requirements for third-party AI components, APIs, and pre-trained models.

The standard also includes Annex B (implementation guidance for the Annex A controls), Annex C (potential AI-related organisational objectives and risk sources), and Annex D (use of the AI management system across domains or sectors). These annexes provide practical guidance for implementation rather than additional mandatory requirements.

AI Risk Management Under ISO 42001

Risk management is at the heart of ISO 42001, but the nature of AI risk differs fundamentally from traditional information security or operational risk. The standard requires organisations to consider several categories of AI-specific risk:

  • Bias and fairness risk: AI systems may produce discriminatory outcomes due to biased training data, flawed model design, or inappropriate feature selection. The standard requires organisations to assess and mitigate bias throughout the AI lifecycle (an illustrative monitoring sketch follows this list).
  
  • Transparency and explainability risk: Many AI systems, particularly deep learning models, operate as "black boxes." ISO 42001 requires organisations to determine the appropriate level of explainability for each AI system based on its risk profile and stakeholder needs.
  • Robustness and reliability risk: AI systems may behave unpredictably when encountering data that differs from their training distribution. The standard requires testing for edge cases, adversarial inputs, and distribution drift.
  • Privacy risk: AI systems frequently process personal data, and techniques like model inversion or membership inference can extract personal information from trained models. ISO 42001 requires privacy impact assessments for AI systems and appropriate technical safeguards.
  • Safety risk: In domains like healthcare, autonomous vehicles, and industrial automation, AI failures can cause physical harm. The standard requires safety assessments proportionate to the potential consequences of system failure.
  • Accountability risk: When AI systems make or influence decisions, clear lines of accountability must be maintained. The standard requires that human oversight mechanisms are established and that responsibility for AI outcomes is explicitly assigned.
  • Environmental risk: Training and operating large AI models consumes significant energy. While not yet a primary focus, the standard acknowledges environmental considerations as part of the broader impact assessment.
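The standard states these requirements at the level of outcomes rather than techniques; it does not prescribe particular metrics, tests, or tools. As a purely illustrative sketch, the example below shows two simple checks that often appear in bias and robustness monitoring: a demographic parity difference between two groups, and a two-sample Kolmogorov-Smirnov test for feature drift. The function names, thresholds, and data are assumptions made for the example, not requirements of ISO 42001.

```python
# Illustrative fairness and drift checks. ISO 42001 does not mandate these
# metrics; names and thresholds below are assumptions for the example.
import numpy as np
from scipy.stats import ks_2samp


def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups (0 and 1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())


def feature_drift(reference, live, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test; flags drift when the p-value < alpha."""
    statistic, p_value = ks_2samp(reference, live)
    return {"statistic": statistic, "p_value": p_value, "drift": p_value < alpha}


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    preds = rng.integers(0, 2, size=1_000)   # hypothetical binary decisions
    groups = rng.integers(0, 2, size=1_000)  # hypothetical protected attribute
    print("Demographic parity difference:", demographic_parity_difference(preds, groups))

    reference = rng.normal(0.0, 1.0, size=5_000)  # training-time feature values
    live = rng.normal(0.3, 1.0, size=5_000)       # production feature values
    print("Drift check:", feature_drift(reference, live))
```

In practice, checks like these would run on a schedule against production data, with the results feeding the monitoring and management review activities described in Clause 9.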

AI System Impact Assessment

One of the most significant requirements in ISO 42001 is the AI System Impact Assessment (AISIA). This goes beyond traditional risk assessment by evaluating the potential effects of an AI system on individuals, groups, communities, and society. The AISIA must be conducted before an AI system is deployed and must be reviewed periodically or when significant changes occur.

The impact assessment must consider direct impacts (the intended effects of the AI system), indirect impacts (secondary effects that may not be immediately obvious), cumulative impacts (the combined effect of the AI system with other systems and processes), and systemic impacts (broader effects on markets, social systems, or democratic processes).

For organisations subject to the EU AI Act, the AISIA aligns closely with the fundamental rights impact assessment that Article 27 of the regulation requires of certain deployers of high-risk AI systems. This makes ISO 42001 certification a practical pathway towards demonstrating regulatory compliance.
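As a minimal sketch of how such an assessment might be recorded, the structure below captures the impact categories described above. The field names and example values are assumptions for illustration; ISO 42001 does not define a template.

```python
# A minimal sketch of an AI system impact assessment record. Field names and
# example values are assumptions; ISO 42001 does not define a template.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ImpactAssessment:
    system_name: str
    assessed_on: date
    affected_stakeholders: list[str]   # individuals, groups, communities, society
    direct_impacts: list[str]          # intended effects of the system
    indirect_impacts: list[str]        # secondary effects that may not be obvious
    cumulative_impacts: list[str]      # combined effects with other systems and processes
    systemic_impacts: list[str]        # effects on markets, social or democratic processes
    mitigations: list[str] = field(default_factory=list)
    next_review: date | None = None    # periodic review, or sooner on significant change


assessment = ImpactAssessment(
    system_name="claims-triage-model",  # hypothetical system
    assessed_on=date(2025, 1, 15),
    affected_stakeholders=["policyholders", "claims handlers"],
    direct_impacts=["faster triage of low-complexity claims"],
    indirect_impacts=["handlers review a more complex mix of remaining claims"],
    cumulative_impacts=["interaction with a fraud-scoring model may compound denials"],
    systemic_impacts=["pricing pressure in the regional insurance market"],
    mitigations=["human review of all denials", "quarterly bias audit"],
    next_review=date(2025, 7, 15),
)
```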

The Relationship Between ISO 42001 and ISO 27001

ISO 42001 and ISO 27001 are complementary, not competing standards. ISO 27001 addresses information security — the protection of information assets from threats. ISO 42001 addresses AI governance — the responsible management of AI systems throughout their lifecycle. In practice, most organisations that deploy AI systems will need both.

The overlap occurs in several areas. Data protection and privacy controls are relevant to both standards. Access control and security monitoring apply to AI systems just as they do to other information systems. Supply chain security, a focus of ISO 27001:2022's revised Annex A, is directly relevant to managing third-party AI components.

Because both standards follow the Harmonised Structure, they can be efficiently integrated into a single management system. The context analysis, leadership commitment, resource management, internal audit, and management review processes can serve both standards simultaneously. Only the domain-specific controls (Annex A of each standard) require separate treatment.

Organisations already certified to ISO 27001 have a significant head start with ISO 42001. The management system infrastructure is already in place — the additional work centres on AI-specific policies, impact assessments, and lifecycle controls.

Who Needs ISO 42001 Certification?

ISO 42001 is relevant to any organisation that develops, provides, or uses AI systems. The standard distinguishes between these three roles because the risks and responsibilities differ:

  • AI developers build AI models and systems. They bear responsibility for design decisions that affect fairness, transparency, and robustness. This includes technology companies, AI research organisations, and internal AI teams within enterprises.
  • AI providers make AI systems available to others, whether through products, services, or APIs. They must ensure their offerings meet the governance requirements expected by their customers and regulators. This includes SaaS vendors, cloud AI service providers, and platform companies.
  • AI users deploy and operate AI systems within their business processes. Even if they did not develop the AI, they are responsible for how it is applied, monitored, and governed in their context. This includes virtually every large enterprise that uses AI-powered tools.

Certification is particularly valuable for organisations in regulated industries (financial services, healthcare, pharmaceuticals, insurance), organisations that supply AI systems to enterprise customers, companies operating in the EU that fall under the AI Act's requirements, and any organisation that wants to differentiate itself on the basis of responsible AI practices.

Alignment with the EU AI Act

The EU AI Act classifies AI systems into four risk categories: unacceptable risk (prohibited), high risk (subject to extensive requirements), limited risk (transparency obligations), and minimal risk (no specific requirements). For high-risk AI systems, the Act imposes requirements covering risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity.

ISO 42001 does not provide automatic compliance with the EU AI Act, but it provides a structured framework for meeting many of the Act's requirements. Specifically:

  • The AI risk management process in ISO 42001 (Clause 6) maps to the EU AI Act's risk management requirements (Article 9).
  • The data governance controls in Annex A align with the Act's data quality requirements (Article 10).
  • The AI System Impact Assessment addresses the fundamental rights impact assessment (Article 27).
  • The lifecycle controls cover technical documentation (Article 11) and record-keeping (Article 12).
  • The transparency and explainability requirements support the Act's transparency obligations (Article 13).
  • The human oversight controls align with the Act's human oversight requirements (Article 14).

The European Commission has indicated that harmonised standards will play a role in establishing a presumption of conformity with the AI Act. While ISO 42001 is an international standard rather than a European harmonised standard (which would require adoption by CEN/CENELEC), it is widely expected to form the basis for, or strongly influence, the harmonised standards that will eventually be published.

The Certification Process

ISO 42001 certification follows the same two-stage audit process as other ISO management system certifications:

Stage 1 — Documentation Review: The certification body reviews the organisation's AIMS documentation, including the AI policy, scope statement, risk assessment methodology, Statement of Applicability for Annex A controls, AI system inventory, impact assessments, and supporting procedures. The auditor verifies that the management system is designed to meet the standard's requirements and identifies any areas that need attention before Stage 2.

Stage 2 — Implementation Audit: Auditors conduct on-site (or remote) assessments to verify that the AIMS is implemented and effective. This involves interviewing personnel, reviewing records, examining AI systems and their documentation, and testing the effectiveness of controls. The audit covers all clauses of the standard and all applicable Annex A controls.

Upon successful completion of both stages, the organisation receives a certificate valid for three years, subject to annual surveillance audits. The surveillance audits review a subset of the management system each year, with the complete system covered over the three-year cycle.

Practical Steps to Get Started

Organisations considering ISO 42001 certification should take the following preparatory steps:

  • Inventory your AI systems. You cannot govern what you do not know about. Create a comprehensive inventory of all AI systems in use, under development, or provided to customers. Include details on the type of AI (machine learning, NLP, computer vision, etc.), the data used, the decisions influenced, and the stakeholders affected. A sketch of what an inventory record might look like follows this list.
  • Assess your current governance maturity. Many organisations already have some AI governance practices in place, even if they are informal. Document what exists and identify gaps against the ISO 42001 requirements.
  • Establish an AI policy. This is the foundational document that articulates your organisation's commitment to responsible AI. It should address fairness, transparency, accountability, safety, privacy, and human oversight.
  • Conduct AI impact assessments. For each AI system, assess the potential impacts on individuals, groups, and society. Prioritise high-risk systems for immediate attention.
  • Integrate with existing management systems. If you hold ISO 27001 or other certifications, plan to integrate the AIMS from the outset. This avoids duplication and reduces the operational burden.
  • Build competence. AI governance requires a blend of technical AI expertise, legal and regulatory knowledge, ethics, and domain-specific understanding. Identify skill gaps and invest in training.
  • Engage leadership. AI governance is not an IT project — it requires executive sponsorship and cross-functional involvement. Present the business case, including regulatory compliance, customer trust, and risk reduction.
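To make the first step concrete, the sketch below shows one way an inventory record could be structured, mirroring the details suggested above. The field names, categories, and example system are assumptions for illustration.

```python
# A minimal sketch of an AI system inventory entry. Field names, categories,
# and the example system are assumptions, not requirements of ISO 42001.
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    name: str
    ai_type: str               # e.g. "machine learning", "NLP", "computer vision"
    lifecycle_stage: str       # "in use", "under development", "provided to customers"
    data_sources: list[str]    # datasets or feeds used for training and inference
    decisions_influenced: str  # what the system decides or recommends
    stakeholders: list[str]    # who is affected by those decisions
    owner: str                 # accountable role, not just the technical team
    risk_tier: str             # organisation-defined tier used to prioritise assessments


inventory = [
    AISystemRecord(
        name="credit-underwriting-scorer",  # hypothetical example
        ai_type="machine learning",
        lifecycle_stage="in use",
        data_sources=["application forms", "credit bureau data"],
        decisions_influenced="loan approval recommendations",
        stakeholders=["applicants", "underwriters"],
        owner="Head of Credit Risk",
        risk_tier="high",
    ),
]
```

A living inventory like this also becomes the natural starting point for the impact assessments and prioritisation steps that follow.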

How BALTUM Can Help

BALTUM offers comprehensive support for organisations pursuing ISO 42001 certification. Our AI governance specialists work with clients across multiple industries and regulatory environments. Our services include:

  • AI Governance Readiness Assessment: A thorough evaluation of your current AI practices against ISO 42001 requirements, delivered as a prioritised implementation roadmap.
  • AIMS Implementation Support: Hands-on guidance through the design and implementation of your AI Management System, including policy development, risk methodology, impact assessment frameworks, and control implementation.
  • AI System Impact Assessment: Expert facilitation of impact assessments for your AI systems, with particular attention to bias, fairness, transparency, and human rights considerations.
  • Integrated Management System Design: For organisations that hold ISO 27001 or other certifications, we design integrated management systems that cover AI governance without unnecessary duplication.
  • EU AI Act Compliance Mapping: Detailed mapping of your AIMS to EU AI Act requirements, identifying any gaps that certification alone may not cover.
  • Pre-Certification Audit: A full mock audit conducted by experienced assessors to ensure readiness before your certification body's visit.

Responsible AI governance is quickly becoming a competitive necessity, not just a regulatory requirement. Organisations that establish robust AI management systems now will be better positioned to scale their AI capabilities with confidence, meet evolving regulatory expectations, and earn the trust of customers, partners, and the public. ISO 42001 provides the framework — the task is to implement it with the rigour and commitment the technology demands.