CSA launches guidelines and companion guide on securing AI systems
29 October 2024
On 15 October 2024, the Cyber Security Agency of Singapore (“CSA”) announced the launch of the Guidelines on Securing AI Systems (“Guidelines”) and Companion Guide on Securing AI Systems (“Companion Guide”) at the Singapore International Cyber Week 2024. This follows a public consultation conducted by CSA from 31 July 2024 to 15 September 2024 to seek comments on the draft versions of the Guidelines and Companion Guide. For more information on the public consultation, please read our article “CSA seeks comments on draft guidelines and companion guide on securing AI systems”.
CSA developed the Guidelines to help system owners secure the use of artificial intelligence (“AI”) throughout its lifecycle. The Guidelines should be used alongside existing security best practices and requirements for IT environments, and they make clear that they address only cybersecurity risks to AI systems.
As with good cybersecurity practice, CSA recommends that system owners take a lifecycle approach in considering security risks. Given the diversity of AI use cases, there is no one-size-fits-all solution to implementing security. As such, effective cybersecurity starts with conducting a risk assessment. The Guidelines identify potential security risks associated with the use of AI and recommend a four-step approach to tailor a systematic defence plan that best addresses an organisation’s highest priority risks:
- Conduct a risk assessment, focusing on security risks related to AI systems.
- Prioritise which risks to address, based on risk level, impact, and available resources.
- Identify relevant actions and control measures to secure the AI system, such as by referencing those outlined in the Companion Guide, and implement these across the AI life cycle.
- Evaluate the residual risk remaining after implementing security measures for the AI system, to inform decisions on whether to accept or further address it.
The Guidelines also set out recommendations for mitigating security risks at each stage of the AI lifecycle, as follows:
- Planning and design: Raise awareness of AI security threats and conduct risk assessments.
- Development: Secure the supply chain and protect AI assets.
- Deployment: Secure infrastructure, establish incident management processes, and conduct AI benchmarking and red-teaming.
- Operations and maintenance: Monitor for security anomalies and establish vulnerability disclosure processes.
- End of life: Ensure secure and proper disposal of data and model artefacts.
As AI security is a developing field of work and mitigation controls continue to evolve, CSA is also collaborating with AI and cybersecurity practitioners on the Companion Guide, which is intended as a community-driven resource. The Companion Guide complements the Guidelines as a useful reference containing practical measures and controls that system owners may consider when adopting the Guidelines, depending on their use case. It will be updated over time to account for technological developments.
The Guidelines and Companion Guide were developed by referencing established international industry guidelines and standards.
CSA strongly encourages organisational leaders, business owners, and AI and cybersecurity practitioners to adopt the Guidelines to implement AI systems securely.
Reference materials
The following materials are available on the CSA website www.csa.gov.sg: