Singapore launches AI Verify to promote transparency and invites organisations to pilot
28 June 2022
In May 2022, Singapore launched AI Verify, the world’s first artificial intelligence (“AI”) governance testing framework and toolkit, for organisations that want to demonstrate their implementation of responsible AI in an objective and verifiable way. The launch was announced by the Info-communications Media Development Authority of Singapore (“IMDA”) on 25 May 2022.
Developed by IMDA and the Personal Data Protection Commission (“PDPC”), AI Verify brings together a set of open-source testing solutions, including process checks, into a toolkit that generates reports for developers, management and business partners. The reports cover major areas affecting AI performance, such as the transparency, safety and resilience of the AI system, and the accountability and oversight of AI systems.
AI Verify is currently available as a Minimum Viable Product (“MVP”) for system developers and owners who want to be more transparent about the performance of their AI systems through a combination of technical tests and process checks. IMDA is inviting organisations from the broader industry to participate in the pilot phase of the MVP. Participants will have the opportunity to:
- have early and full access to an internationally aligned AI governance testing framework and toolkit MVP, and use it to conduct self-testing on their AI systems/models;
- produce reports to demonstrate transparency and build trust with their stakeholders;
- provide feedback to IMDA to help shape the MVP so that it reflects industry needs and benefits the industry; and
- join the AI testing community to network, share and collaborate with other participating companies to build industry benchmarks and contribute to international standards development.
The pilot aims to:
- validate that the MVP can be implemented by owners and developers across a wide range of AI systems;
- identify research and development opportunities for testing tools and engage research institutions and technology solution providers for collaboration;
- begin collating industry consensus on acceptable performance of AI systems in terms of metrics and thresholds;
- begin collating industry best practices on implementing trustworthy AI systems; and
- explore compatibility and interoperability with AI systems testing frameworks developed by like-minded owners and developers.
IMDA aims to release an updated version of the AI Governance Testing Framework and Toolkit at the end of the pilot.