The European Commission organized a webinar on 29 September to discuss standardization for AI, which also covered requirements for high-risk AI systems, conformity assessment and governance.

Participants from the Council of Europe, UNESCO, the OECD and the standards development bodies ISO/IEC, ITU, CEN/CENELEC, IEEE and ETSI shared their perspectives and their latest standards work on AI.

European Commission representatives welcomed the progress presented by ISO/IEC in the AI standardization field. Some foundational horizontal standards have already been published, and several additional horizontal and sectoral standards are in the pipeline.

There is some degree of alignment between the orientations of the standardization work in the AI field and the regulatory objectives of the Commission as presented in the White Paper of February 2020.

Future cooperation between standardization organizations and the European Commission is key to ensuring that future standards can serve regulatory purposes and hence support the AI framework in the EU.

The Commission signalled its intention to strengthen ties with standardization organizations in order to ensure that high-quality standards are available to AI providers and the AI community by the time the AI framework becomes applicable.

“It was a great forum with diverse participants in the standards development space. I was pleased to give an update on the joint work of IEC and ISO in SC 42. We are developing horizontal international standards for the entire AI ecosystem”, said Wael William Diab, who leads the IEC and ISO work in standardization for artificial intelligence.

State of play and approach to AI standardization

The purpose of the webinar was to look at the state of play and approach to AI standardization: what the approach should be, where the gaps are and how they should be addressed, how best to link EU efforts on standardization and possible regulation of AI with relevant activities in the rest of the world, and whether there should be increased coordination between relevant standardization activities.

Requirements for high-risk AI systems

Diab presented the work of SC 42 during session 2 on the requirements for high-risk AI systems.

“We are a systems integration committee and we’ve grown to having 47 national bodies that participate in our work and about 21 active projects. We have established liaisons with many of the colleagues on the call today, including the EC and OECD, over the past year. We also have a lot of collaboration internally, for example our work on AI and functional safety with IEC TC 65, and the extension of the SQuaRE requirements for software engineering with ISO/IEC JTC 1/SC 7. We are seeing interest in both the technical aspects addressing issues coming up with AI as well as contextual ones like ethics and bias and so on”, said Diab.

SC 42 is structured into six working groups (WGs): AI foundational standards; data standards related to AI, big data and analytics; trustworthiness; use cases and applications; computational approaches and characteristics; and a joint WG with SC 40 (which covers IT service management and IT governance) on the governance implications of AI.

“There has been a lot of discussion on the idea of technical standards as well as the context of use of the technology, and SC 42 is taking that approach from the ecosystem perspective. We are trying to assimilate the requirements from a technical perspective on the context of use of AI and big data analytics by looking at some of these requirements from the application domain, or from regulatory or emerging societal and ethical requirements. We provide guidance to IEC and ISO committees looking at AI applications. IEC and ISO have many TCs that are looking at different application domains, at various stages of roadmapping for AI inclusion”, said Diab.

SC 42 develops horizontal and foundational documents on which other standards, as well as open source implementations, can be built. This provides a way to bridge the gap between the notion of the context of use of the technology and what it means in technical terms, for example by translating ethical requirements into technical standards for trustworthiness.

AI stakeholders have recommended some general requirements for training datasets such as representativeness, accuracy and completeness of the data.

Asked whether such requirements (accuracy and completeness of data) can be standardized so that data sets can be certified and reused, Diab responded: “SC 42 is not trying to standardize data sets. Part of the evolution of expanding the scope of our data WG, whether AI, analytics or big data, is a growing theme of looking at quality parameters of data sets; in big data, the five Vs are things that we typically look at. It is key to consider some of the parameters for data sets that are used for training, and it is also key to trigger retraining. From that perspective there is more to be done in terms of standardization. Some people may think it is about standardizing the data sets themselves, but I think it is more about the parameters, and that’s why we launched the multi-part series of standards.”
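The kinds of data-set quality parameters mentioned here, such as completeness and representativeness, can be expressed as simple measurable checks. The sketch below is purely illustrative and not drawn from any SC 42 document; the metric names, field names and example data are hypothetical.

```python
# Illustrative sketch only: simple data-set quality metrics in the spirit of
# the parameters discussed above. Names and data are hypothetical, not taken
# from any SC 42 standard.
from collections import Counter

def completeness(rows, fields):
    """Fraction of expected field values that are actually present (non-None)."""
    total = len(rows) * len(fields)
    present = sum(1 for row in rows for f in fields if row.get(f) is not None)
    return present / total if total else 0.0

def label_balance(rows, label_field):
    """Ratio of the rarest label count to the most common (1.0 = perfectly balanced)."""
    counts = Counter(row[label_field] for row in rows
                     if row.get(label_field) is not None)
    if not counts:
        return 0.0
    return min(counts.values()) / max(counts.values())

# Hypothetical training rows with some missing values.
rows = [
    {"age": 34, "income": 52000, "label": "approve"},
    {"age": 29, "income": None,  "label": "deny"},
    {"age": 41, "income": 87000, "label": "approve"},
    {"age": None, "income": 61000, "label": "approve"},
]

print(completeness(rows, ["age", "income", "label"]))  # 10 of 12 values present ≈ 0.833
print(label_balance(rows, "label"))                    # 1 deny vs 3 approve ≈ 0.333
```

Tracking such parameters over time, rather than the data sets themselves, is one way a drop in quality could be used to trigger retraining, as the quote suggests.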

SC 42 achievements and new work

SC 42 completed its big data foundational work, and ISO/IEC Technical Report 24028, which provides an overview of trustworthiness in AI, has been published.

The work on use cases, including over 130 examples of AI applications, is in the process of being published and will be revised with new cases.

New work includes:

  • the expansion of the scope of the big data WG to include data aspects related to AI, big data and analytics, given the commonality in data quality properties across those three areas
  • AI management systems standards (ISO/IEC 42001)
  • a new data quality series, with an initial three projects related to analytics and machine learning
  • a new extension for the AI life cycle process
  • an extension for SQuaRE (ISO/IEC WD 5059, Software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Quality Model for AI-based systems)
  • for the application of AI and domain-specific AI standards, a newly approved project, ISO/IEC 5339, on guidelines for AI applications, which will help bridge and enable application standards development based on the horizontal standards SC 42 is developing

Approach to ethics:

All the subgroups are looking at ethical and societal concerns. A good example is the consideration of such concerns in the use cases. This is a new approach to use cases, and SC 42 ties those requirements to the technical work being developed in WG 3 through ISO/IEC 24368.

Management systems standards (MSS):

The idea here is to develop a management systems standard that could be specific to the process used for the development, and potentially the assessment, of how an AI developer has gone about developing a system. This new project has just been approved and assigned to WG 1 on foundational standards, which will start the development work. It is a novel approach that builds on the strengths of management systems standards and of AI ecosystem standardization, and it will enable broad adoption of AI.