Introduction
Artificial intelligence (AI) systems are increasingly used across sectors such as health care, finance, education and public services to enhance efficiency, productivity and innovation. But as AI transforms these industries, it also presents new vulnerabilities. The new Cyber Security Agency of Singapore (CSA) guidelines offer a proactive, lifecycle approach to countering these risks, encouraging businesses to implement AI systems that are secure by design and by default.
The guidelines
The CSA guidelines outline five critical stages to securing AI systems throughout their lifecycle:
- Planning and design: Businesses should anticipate potential threats during the design phase, including adversarial machine learning and supply chain attacks.
- Development: Emphasis is placed on securing AI assets and ensuring robust protection across the AI supply chain.
- Deployment: Deploying AI systems in secure environments with strong infrastructure safeguards is key.
- Operations and maintenance: Continuous monitoring for anomalies and implementing vulnerability disclosure processes are crucial for maintaining security over time.
- End of life: Proper disposal of AI-related data and artifacts at the end of their lifecycle ensures that outdated systems do not become a target for cyberattacks.
These stages offer a comprehensive blueprint for managing AI security risks from inception to decommissioning, helping businesses avoid data breaches and malicious manipulation of AI systems. The guidelines also include a self-assessment checklist and a glossary of terms to help organisations assess and improve their AI security posture and awareness.
The CSA’s guidelines were developed through extensive collaboration with industry experts and AI practitioners, including a public consultation period that gathered feedback from AI and cybersecurity companies. This input helped refine the guidelines and align them with international standards such as those from the U.S. Cybersecurity and Infrastructure Security Agency and the UK National Cyber Security Centre.
In addition, a Companion Guide has been created alongside the guidelines as a living document that will evolve as AI technology and security threats develop. It curates best practices and offers practical advice on implementing security measures at each stage of the AI lifecycle.
Conclusion
These developments signal Singapore’s growing leadership in AI governance and cybersecurity as its digital economy expands. The guidelines represent both a challenge and an opportunity for businesses operating in AI-heavy industries. Implementing them will require substantial investment in secure infrastructure, continuous monitoring and incident response protocols, but it can also give businesses a competitive edge by assuring customers and partners that their AI systems are safe, secure and reliable.
Our technology lawyers are experienced and highly familiar with the sector’s latest developments.
Reed Smith LLP is licensed to operate as a foreign law practice in Singapore under the name and style Reed Smith Pte Ltd (hereafter collectively, "Reed Smith"). Where advice on Singapore law is required, we will refer the matter to and work with Reed Smith's Formal Law Alliance partner in Singapore, Resource Law LLC, where necessary.
Client Alert 2024-220