New ISO/IEC 42005 – AI System Impact Assessment

In the absence of a globally accepted framework for AI system impact assessments, ISO/IEC 42005 serves as a guide for stakeholders across the global AI ecosystem, including developers, deployers and users of AI systems. 

An impact assessment is a helpful tool for identifying potential ethical, legal, social and technical risks that may arise across the lifecycle of an AI system. By identifying such risks early, developers of AI systems and products can address and mitigate potential harms, with assessments continuing through the development, deployment and adoption stages. 

This process benefits organizations because they can document and demonstrate the steps taken to ensure the ethical design and development of responsible AI tools, which in turn builds trust among their users, regulators and the general public. The Standard also highlights how AI system impact assessment processes can be integrated into an organization's AI risk management and AI management system.

In line with this, ISO/IEC 42005 provides a structure for carrying out impact assessments that helps organizations align AI development with values and principles relevant to trustworthy AI, including fairness, safety and human-centred design. It lays out the steps for developing and implementing an AI system impact assessment, such as defining the scope and timing of the assessment, allocating responsibilities, and establishing thresholds for sensitive uses, restricted uses and the scale of impact.  

ISO/IEC 42005 also provides guidance on documenting such an AI system impact assessment, covering the AI system's description, data sources and quality, algorithm and model information, the deployment environment, and actual and reasonably foreseeable impacts. 
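The documentation areas listed above could be captured in a simple record structure. The following is a minimal sketch in Python; the field names and the completeness check are illustrative assumptions, as the Standard itself does not prescribe a schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the documentation areas ISO/IEC 42005 suggests
# recording. Field names are illustrative, not the Standard's own terms.
@dataclass
class ImpactAssessmentRecord:
    system_description: str            # AI system information
    data_sources_and_quality: str      # data information and quality
    algorithm_and_model: str           # algorithm and model information
    deployment_environment: str        # where and how the system runs
    foreseeable_impacts: list = field(default_factory=list)  # actual and
                                       # reasonably foreseeable impacts

    def is_complete(self) -> bool:
        """True when every narrative field is filled in and at least
        one impact has been considered."""
        narrative = [
            self.system_description,
            self.data_sources_and_quality,
            self.algorithm_and_model,
            self.deployment_environment,
        ]
        return all(narrative) and bool(self.foreseeable_impacts)


# Example usage with invented content:
record = ImpactAssessmentRecord(
    system_description="CV-screening assistant",
    data_sources_and_quality="Historical hiring data, reviewed for bias",
    algorithm_and_model="Gradient-boosted classifier",
    deployment_environment="Internal HR portal",
    foreseeable_impacts=["Unfair rejection of qualified candidates"],
)
print(record.is_complete())
```

A structured record like this also makes it easier to show regulators and auditors that each documentation area was actually considered, rather than relying on free-form notes.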

It further highlights how this assessment process can be aligned with existing processes such as data protection impact assessments and fundamental rights impact assessments, to ensure consistent outcomes, recommendations and downstream compliance obligations.

ISO/IEC 42005 joins previously published standards in the AI ecosystem, including ISO/IEC 38507 on the governance implications of the use of AI by organizations, ISO/IEC 23894 on AI risk management, and ISO/IEC 42001 on AI management systems, against which conformity can be assessed. 

ISO/IEC 42005:2025 can be found here.
