Key takeaways
- The framework furthers international consensus on generative AI governance
- The framework expands on the existing Model AI Governance Framework
Introduction
The AI Verify Foundation (the AIVF) and the Infocomm Media Development Authority (the IMDA) have proposed a draft Model AI Governance Framework for Generative AI (Model Framework), which expands on the existing Model AI Governance Framework last updated in 2020. The Model Framework is expected to be finalised in mid-2024 after feedback from both the Singapore and international communities has been considered. A public consultation on the Model Framework is open until 15 March 2024.
Scope of the Model Framework
The Model Framework incorporates previous discussions and technical work conducted by the IMDA and other partners on generative AI. The previous Model AI Governance Framework focused on “traditional” AI and did not take into account the significant transformative potential and the risks unique to generative AI. Generative AI refers to AI models that can generate text, images and other media based on the patterns and structures learned from their training data.
The Model Framework aims to set out a “systematic and balanced approach” to addressing generative AI concerns and to facilitate conversations among the international community on how generative AI can be developed in a trusted manner.
The Model Framework addresses nine dimensions to facilitate a trusted ecosystem for generative AI and includes practical suggestions for each dimension:
- Accountability – this incentivises players in the AI development cycle to be responsible to end users. Accountability could be promoted via a framework that allocates responsibility among players, similar to the approach taken in cloud and software development stacks.
- Data – this is a core element of AI development and may involve contentious categories such as personal data and copyright material. Practical measures must be implemented to ensure such data is handled transparently and in compliance with applicable laws.
- Trusted development and deployment – this is necessary to promote the broader use of AI by end users. Transparency should extend not only to end users but also to other industry players so that the industry can develop best practices that enhance awareness and safety over time.
- Incident reporting – an established reporting system allows incidents to be addressed promptly. Apart from limiting damage, it also enables continuous improvement of the underlying AI systems.
- Testing and assurance – internal testing alone is insufficient to grow a trusted ecosystem. Robust third-party testing, similar to that used in the finance sector, enables independent verification and promotes convergence on common testing standards.
- Security – generative AI’s ability to learn and create content on its own makes it an attractive target for cyber threats to be “injected” into the AI models themselves. Security measures must therefore cover the AI models as well, not just the surrounding software stack.
- Content provenance – generative AI could be used to exacerbate misinformation, for example through deepfake videos promoting scams. Transparency about the source of content enables end users to consume online content in an informed manner. This can be achieved through new technical solutions, such as digital watermarking, which can distinguish AI-generated images from original ones.
- Safety and alignment research and development (R&D) – the nascent state of generative AI leaves some risks unidentified and unaddressed. More R&D is needed to align AI models with human intentions and values, and to keep pace with the commercially driven growth seen in models such as ChatGPT.
- AI for public good – this recognises that responsible AI goes beyond risk mitigation. Various stakeholders should contribute to making an AI-enabled future accessible to all people and businesses. This includes promoting AI adoption in the public sector and upskilling workers so that they can take advantage of the latest AI tools.