In April 2022, the International Organization for Standardization (ISO) issued a new standard, ISO/IEC 38507, to provide guidance to the governing body of an organisation that is using, or considering the use of, artificial intelligence (AI), and to encourage organisations to use appropriate standards to underpin their governance of AI.
From IT governance to AI governance
ISO/IEC 38500, the international standard for IT governance, was first published in 2008. It sets out six principles that governing bodies should follow: responsibility, strategy, acquisition, performance, conformance and human behaviour.
Since its publication in 2008, ISO/IEC 38500 has been used as a framework for assessing IT use both before and after implementation, and has evolved through its 2015 revision and the development and publication of the wider 38500 series.
Because AI is built on IT, AI governance was framed as an extension of IT governance, adapted to take account of the particular characteristics of AI. This established the need for ISO 38507.
Where does the board fit in?
ISO 38507 emphasises that the governing body is central to the organisation, setting its purpose and approving the strategies necessary to achieve that purpose. The governing body has a degree of influence over the use and impact of AI on an organisation and must continually assess whether the existing governance is fit-for-purpose as the use of AI changes within an organisation.
The governing body’s accountability is emphasised as being maintained across the full lifecycle of the AI technology, from purchase, implementation, deployment and testing, through the various project phases, all the way to decommissioning.
The diagram below from ISO 38507 shows how the AI system life cycle progresses from inception to decommissioning.
The complex nature of AI ecosystems means that the degree of oversight required by governing bodies depends on a variety of factors, including the following:
• the intended use of the AI system;
• the type of AI used;
• the potential benefit the AI system will deliver;
• the new risks that can accompany the AI system;
• the stage of implementation of the AI system.
ISO 38507 recommends that organisations take the following actions, amongst others, to place necessary constraints on the use of AI:
Increase oversight of compliance
Governance oversight within organisations should be based on the organisation’s own policies and should establish effective individual and collective accountability through an appropriate chain of responsibility, set in the context in which AI is used.
This includes putting policies in place to ensure that AI is used appropriately, that there is sufficient human oversight, and that anyone using AI is properly trained and knows how to raise concerns. Legal requirements or obligations for using such technologies should be identified and weighed alongside the organisation’s risk appetite.
Address the scope of use of AI
Formulating a description of the AI system, covering its algorithms, data and models, would provide the transparency needed to ensure the AI technology is deployed only for its intended use.
Assess and address the impact on stakeholders
ISO 38507 notes that the governing body is responsible, even outside the context of AI, for shaping and defining the organisation’s desired culture, which has an impact on stakeholders connected to the organisation.
ISO 38507 also notes that an organisation’s culture and values are implicitly embedded in the behaviour of its staff. It therefore advocates a degree of human involvement in the AI process, ensuring that AI systems can be monitored and corrected when needed.
A “cultures and values board” or an “ethics review board” might be set up to supervise the impact of AI systems and ensure they remain aligned with the organisation’s values and culture.
The future for AI
An organisation’s governing body shapes its purpose, mission, vision, ethos, values and culture, and has a central role in steering strategy, resourcing and oversight of such activities. Governance of AI itself is key to the adoption of AI.
Statistics differ widely depending on how ‘AI adoption’ in the EU is measured (7% according to Eurostat, 2021, or 42% according to the European Commission, 2020), but the fact remains that a key barrier to increased uptake of, and trust in, AI is uncertainty over exactly how AI should be governed.
Whilst there is no universal standard for what AI governance should look like, this gap presents a significant opportunity for legislators globally to map out what they want AI regulation to look like.
ISO has also embraced the development of a separate standard on AI risk management, which sits well alongside the UK’s National AI Strategy and its strong emphasis on the development of global technical standards.
Earlier this year, the UK government announced the creation of a new AI Standards Hub to help organisations better utilise and benefit from AI. We hope that ISO 38507 will be promoted by the AI Standards Hub, as an additional tool that can be offered to the UK AI community.
Like all ISO standards, the publication of ISO 38507 is just the starting point, and the standard will continue to evolve as it is used globally. We nevertheless expect that it will become the governance framework that management refers to, and that some form of governance assessment will consequently be required.
ISO is currently developing a standard for the evaluation of IT governance, and AI governance is likely to require a similar standard in the near future.
Sam De Silva is a partner and Barbara Zapisetskaya a senior associate at international law firm, CMS Cameron McKenna Nabarro Olswang LLP.