19 June 2024

On 30 May 2024, Singapore launched the Model AI Governance Framework for Generative AI (“Gen AI Framework”) which seeks to present a systematic and balanced approach to address generative AI concerns while continuing to facilitate innovation. It requires all key stakeholders, including policymakers, industry, the research community, and the broader public, to collectively do their part.

Nine dimensions of the Gen AI Framework

The Gen AI Framework identifies nine dimensions to support a comprehensive and trusted AI ecosystem. Set out below is a snapshot of the nine dimensions:

  • Accountability: To put in place the right incentive structure for different players (such as model developers, application deployers, and cloud service providers) in the AI system development life cycle to be responsible to end-users.

The Gen AI Framework provides that there may be value in extending the cloud industry’s approach of allocating responsibility upfront through shared responsibility models to AI development. To better cover end-users, the Gen AI Framework states that it will be worth considering additional measures including concepts around indemnity and insurance that may potentially act as safety nets. It is also useful to consider updating legal frameworks to make them more flexible, and to allow emerging risks to be easily and fairly addressed. Lastly, for residual issues that fall through the cracks, alternative solutions such as no-fault insurance could be considered as well.

  • Data: To ensure data quality (for example, through using trusted data sources) and address potentially contentious training data (for example, personal data and copyright material) in a pragmatic way.

According to the Gen AI Framework, as personal data operates within existing legal regimes, a useful starting point is for policymakers to clarify how existing personal data laws apply to generative AI. It is also important to understand how privacy enhancing technologies (PETs) such as anonymisation techniques can be applied to AI, and potentially allow the usage of data while protecting data confidentiality and privacy. Given the large volume of data involved in AI training, there is value in developing approaches through open dialogue with various stakeholders to resolve difficult issues (such as concerns surrounding the use of copyright material in training datasets). At an organisational level, it would be good discipline for AI developers to undertake data quality control measures and adopt general best practices in data governance.

  • Trusted development and deployment: To enhance transparency around baseline safety and hygiene measures based on industry best practices in development, disclosure and evaluation.
    • Development: The Gen AI Framework states that safety measures are developing rapidly, and that model developers and application deployers are best placed to decide what to use. Even so, industry practices are starting to coalesce around some common safety practices such as fine-tuning techniques like Reinforcement Learning from Human Feedback (RLHF) to guide the model to generate “safer” output, Retrieval-Augmented Generation (RAG) and few-shot learning which are commonly used to reduce hallucinations and improve accuracy.
    • Disclosure: The Gen AI Framework provides that relevant information should be disclosed to downstream users. Areas of disclosure may include data used, training infrastructure, evaluation results, mitigation and safety measures, risks and limitations, intended use, and user data protection. The level of detail disclosed can be calibrated based on the need to be transparent vis-à-vis protecting proprietary information. Greater transparency to government will also be needed for models that pose potentially high risks.
    • Evaluation: The Gen AI Framework states that there is a need to work towards a more comprehensive and systematic approach to safety evaluations. The industry can develop approaches that consistently evaluate AI for both its front-end performance and back-end safety. Industry and sectoral policymakers will need to jointly improve evaluation benchmarks and tools, while still maintaining coherence between baseline and sector-specific requirements.
  • Incident reporting: To implement an incident management system for timely notification, remediation, and continuous improvements of AI systems.

Before incidents happen, software product owners adopt vulnerability reporting as part of an overall proactive security approach. AI developers can apply the same concept by allowing reporting channels for uncovered safety vulnerabilities in their AI systems. After incidents happen, organisations need internal processes to report the incident for timely notification and remediation. This may involve notifying other stakeholders such as the public as well as governments. Reporting should be proportionate, which means striking a balance between comprehensive reporting and practicality.

  • Testing and assurance: To provide an external source of validation and build trust through third-party testing and to develop common AI testing standards from established audited practices for quality and consistency of AI systems.

    Fostering the development of a third-party testing ecosystem involves two pivotal aspects:
    • How to test: Defining a testing methodology that is reliable and consistent, and specifying the scope of testing to complement internal testing.
    • Who to test: Identifying the entities to conduct testing that ensures independence.
  • Security: To adapt existing frameworks for information security (for example, the security-by-design security concept which seeks to minimise system vulnerabilities and reduce the attack surface through designing security into every phase of the systems development life cycle) and to develop new testing tools (for example, input filters and digital forensics tools for generative AI) to address new threat vectors that arise through generative AI models.
  • Content provenance: To explore technical solutions such as digital watermarking and cryptographic provenance to provide end-users with transparency as to the source of the content, and to combat harms like misinformation.

The Generative AI Framework provides that there is a need to work with key parties in the content life cycle such as publishers and content creators to support the embedding and display of digital watermarks and provenance details. To improve end-user experience and enable consumers to discern between non-AI and AI-generated content, the types of edits to be labelled can be standardised. End-users also need greater understanding of content provenance across the content life cycle and to learn to utilise tools to verify for authenticity.

  • Safety and alignment research & development (“R&D”): To accelerate R&D through global cooperation among AI safety R&D institutes to improve model alignment with human intention and values.
  • AI for public good: Responsible AI includes harnessing AI to benefit the public by democratising access to technology and improving digital literacy, improving public sector adoption, upskilling workers, and developing AI systems sustainably.

The Gen AI Framework is expected to evolve as technology and policy discussions develop.

Reference materials

The following materials are available from the Info-communications Media Development Authority of Singapore website www.imda.gov.sg and AI Verify Foundation website aiverifyfoundation.sg:

More

Knowledge Highlights 21 November 2024

Bill introduced to revamp and make permanent Simplified Insolvency Programme to support financially distressed companies

Read more

Knowledge Highlights 21 November 2024

Shared Responsibility Framework for FIs, Telcos, and consumers for phishing scams and revisions to E-Payments User Pro ...

Read more