The ISO/IEC 42001 standard introduces a management system for Artificial Intelligence (AI).
AI is a discipline focused on developing the ability of an IT system to imitate human characteristics such as reasoning, learning, planning, and creativity, through learning processes known as "machine learning". AI is increasingly being applied in contexts that use Information Technology to support business processes, including healthcare, finance, transport, human resources, and entertainment.
The integration of AI into processes that have traditionally relied on human thought raises a series of ethical concerns and risks associated with granting decision-making power to automated tools. These tools can change their behavior over time and may be influenced by opaque factors beyond the control of process owners.
The AI management system aims to provide organizations - of any size and scope of activity - with the tools to govern the AI processes relevant to them, whether as developers, suppliers, or users of AI systems. This ensures that the market is provided with guarantees of ethical, responsible, trustworthy, and safe use of AI.
With our many years of experience in the ICT sector and ISO/IEC 27001 and ISO/IEC 20000 certifications, our auditors have specific skills in Computer Science and programming. Additionally, they actively participate in Artificial Intelligence Study Communities to further their knowledge and expertise.
The ISO/IEC 42001 certification is valid for three years and can be renewed at the end of the three-year period.
From a technical point of view, AI management systems refer to several ISO framework standards, such as ISO/IEC TS 4213, 23053, and 5259 on "Machine Learning", ISO/IEC 5338 on the "System life cycle", as well as the methodological standards ISO/IEC 22989 (Concepts and terminology), 23894 (Risk management), and 24368 (Ethical aspects).
NIST has also developed an AI "Risk Management Framework".
In the legislative field, the "Artificial Intelligence Act" is being approved in the European Union; it will be the main regulation governing the use of Artificial Intelligence, based on the level of risk assigned to its applications.