ISO 42001 and AI Governance FAQ
11 May 2026
AI governance is the process by which an organisation manages AI-related risks and legal requirements, and ensures AI is used ethically and responsibly. It is an essential function for any organisation that develops or uses AI systems.
ISO 42001 provides the requirements for an AI management system (AIMS) – a structured system for managing AI risk, meeting AI-related legal requirements, and developing or using AI systems responsibly.
The NIST AI Risk Management Framework is a system for managing risks to AI trustworthiness in the design, development and use of AI.
ISO 27001 defines requirements for an information security management system (ISMS), which focuses on protecting the confidentiality, integrity and availability of the information an organisation holds.
ISO 42001 defines requirements for an AI management system (AIMS), which focuses on managing AI risk, meeting AI-related legal requirements and developing or using AI responsibly.
ISO 27001 is ideal for organisations that want to protect the data they hold against cyber attacks and data breaches. ISO 42001 is ideal for organisations that want to take a structured approach to developing or using AI systems.
ISO 42001 is an international standard for AI management systems (AIMS), which focuses on managing AI risk, meeting AI-related legal requirements and developing or using AI responsibly.
System and Organization Controls (SOC) 2 is a framework that evaluates how service organisations (for example, cloud service providers) manage the privacy and security of customer data.
ISO 42001 provides a structured approach to developing or using AI systems suitable for any organisation. SOC 2 is used by service organisations to demonstrate the security, integrity and availability of their systems.
Like all ISO standards, ISO 42001 compliance is voluntary. ISO 42001 can help organisations ensure they meet the requirements of AI-related laws and regulations, but its use is not mandatory.
The development and use of AI systems exposes organisations to many new risks. Without a formal system to identify and mitigate those risks, an organisation may fall victim to them. ISO 42001 helps organisations understand the AI-related risks they face and take steps to mitigate them. It also helps embed a culture of responsible development and use of AI systems across the organisation.
Accredited ISO 42001 certification lasts for three years, with surveillance audits during that period to confirm continued conformity. As your certificate nears expiry, you can undergo a recertification audit to renew your certification for a further three years.
If your organisation has chosen to implement ISO 42001, your first step is to conduct a readiness assessment. This will highlight the areas that need the most work and allow you to develop a project plan. The plan should define how you will implement each requirement, allocate resources and responsibilities, and set clear timeframes for completion.
Once you have implemented the Standard, the next step is to select a suitable certification body and apply for certification. The certification process usually involves two audits – a Stage 1 review of your documentation and a Stage 2 assessment of your implementation – and, if both are completed successfully, the certification body will issue your certificate.
The EU AI Act, which was published in 2024, is the world’s first legal framework governing the development and use of AI. Some of its requirements took effect in 2025, and the rest take effect over the course of 2026 and 2027.
- The Act prohibits a range of AI systems that could have a detrimental effect on society, such as social scoring and biometric categorisation that infers sensitive attributes.
- Providers of ‘high-risk’ AI systems (for example, those involved in safety functions, education, employment, and access to essential public services) are subject to a range of obligations. These include risk and quality management, governance, documentation and record keeping, reporting, and human oversight.
- Providers of general-purpose AI (GPAI) models (those that can perform a wide array of tasks) are subject to requirements related to technical documentation, transparency and compliance with EU copyright law. If they are established outside the EU, they must also appoint an EU representative.
- Providers of GPAI models that pose systemic risk must also evaluate and test the model to identify and mitigate those risks, and report serious incidents.
- Providers of both high- and limited-risk AI systems must ensure users are always aware they are interacting with an AI system.
- Deployers (users) of AI systems are subject to requirements related to human oversight, logging and monitoring, transparency, and use in accordance with the provider’s instructions, depending on the type of AI system.
If your organisation develops or provides AI systems for use in EU member states, it is likely to be subject to requirements under the Act.
If your organisation uses high-risk AI systems in the workplace (for example, for employment or training purposes), then it is likely to be subject to requirements under the Act.
Organisations that use AI systems outside the EU but use those systems’ output within the EU (for example, a US organisation that uses AI in the US to screen EU-based applicants for roles at an EU subsidiary) are also likely to be subject to requirements under the Act.
The Act prohibits a range of AI systems that could have a detrimental effect on society, such as:
- Biometric categorisation that infers sensitive attributes, such as race or political opinions;
- Social scoring based on behaviour or personal traits;
- Systems that manipulate, deceive, or take advantage of vulnerabilities such as age or disability;
- Systems that infer emotions in workplaces or educational institutions;
- Systems that compile facial recognition databases by untargeted scraping of images from the Internet or CCTV; and
- Real-time remote biometric identification in public spaces for law enforcement purposes, except when certain exceptions apply.
ISO 42001 provides a structured framework that helps organisations implement the governance, risk management and operational controls needed to meet many of the obligations under the AI Act. It offers an ideal foundation on which to build your compliance programme.
The NIST AI Risk Management Framework is focused on managing trustworthiness risks in AI systems, and so can be used to address some aspects of the EU AI Act. Organisations looking for a more rounded foundation for compliance with the Act should also consider ISO 42001.
High-risk AI systems under the AI Act include systems used for recruitment and selection, placing targeted job ads, analysing or filtering applications, and evaluating candidates. They also include systems used to make employment or workplace decisions, allocate tasks based on personal characteristics or behaviour, and monitor or evaluate performance. If your organisation uses AI for these purposes, it is likely to have obligations under the Act.
High-risk AI systems also include those involved in determining access and admission to educational or vocational training, evaluating learning outcomes, assessing appropriate education levels, or that monitor or detect prohibited behaviour during testing. If your organisation uses AI for these purposes, it is likely to have obligations under the Act.