
Singapore proposes framework for generative AI

Author

Eng Han Goh (Resource Law LLC)

Key takeaways

  • The framework furthers international consensus on generative AI governance
  • The framework expands on the existing Model AI Governance Framework

Introduction

The AI Verify Foundation (the AIVF) and the Infocomm Media Development Authority (the IMDA) have proposed a draft Model AI Governance Framework for Generative AI (Model Framework), which expands on the existing Model AI Governance Framework, last updated in 2020. The Model Framework is expected to be finalised in mid-2024 after considering feedback from the Singapore and international communities. A public consultation on the Model Framework is open until 15 March 2024.

Scope of the Model Framework

The Model Framework incorporates previous discussions and technical work conducted by the IMDA and other partners on generative AI. The previous Model AI Governance Framework focused on “traditional” AI and did not take into account the significant transformative potential and risks unique to generative AI. Generative AI refers to AI models that can generate text, images and other media based on the patterns and structure of their training data.

The Model Framework aims to set out a “systematic and balanced approach” for generative AI concerns and facilitate conversations among the international community on how generative AI can be developed in a trusted manner.

The Model Framework addresses nine dimensions to facilitate a trusted ecosystem for generative AI and includes practical suggestions for each dimension:

  1. Accountability – this incentivises players in the AI development cycle to be responsible to end users. Accountability could be promoted via a framework that allocates responsibility, similar to cloud and software development stacks.
  2. Data – this is a core element of AI development and could involve contentious data such as personal data and copyright material. Practical measures must be implemented to ensure contentious data is handled transparently and in compliance with applicable laws.
  3. Trusted development and deployment – this is necessary to promote the broader use of AI by end users. Transparency should extend not only to end users but also to other industry players so that the industry can develop best practices that enhance awareness and safety over time.
  4. Incident reporting – an established reporting system allows for incidents to be addressed promptly. Apart from limiting damage, an established reporting system also enables continuous improvement of the underlying AI systems.
  5. Testing and assurance – internal testing alone is insufficient to grow a trusted ecosystem. Robust third-party testing, akin to external audits in the finance sector, enables independent verification and promotes convergence on common testing standards.
  6. Security – generative AI’s ability to learn and create content on its own makes it an attractive target for cyber threats to be “injected” into the AI models themselves. Security measures must therefore cover the AI models, not just the software stack.
  7. Content provenance – generative AI can be used to exacerbate misinformation, for example through deepfake videos promoting scams. Transparency about the source of content enables end users to consume online content in an informed manner. This can be achieved through new technical solutions, such as digital watermarking, which distinguishes AI-generated images from original ones.
  8. Safety and alignment research and development (R&D) – the nascent state of generative AI leaves some risks unidentified and unaddressed. More R&D is needed to align AI models with human intentions and values, and to keep safety work apace with the commercially driven growth seen in models such as ChatGPT.
  9. AI for public good – this recognises that responsible AI goes beyond risk mitigation. Various stakeholders should contribute to making an AI-enabled future accessible to all people and businesses. This includes promoting AI adoption in the public sector and upskilling workers so that they can take advantage of the latest AI tools.

The Model Framework’s impact on the generative AI ecosystem

The Model Framework takes a further step towards building international consensus on AI governance. In particular, it complements the recent mapping of Singapore’s and the United States’ national AI governance frameworks for interoperability. It will also serve as a useful reference point for future interoperability of frameworks between Singapore and other countries.

The Model Framework has identified four concrete touchpoints where AI can have “beneficial and long-term effects”:

  1. Democratising access to technology
  2. Public service delivery
  3. Workforce
  4. Sustainability

The Model Framework’s focus on these four touchpoints helps to build public trust. Together with good governance, this creates a conducive environment in which generative AI can be used safely and confidently.

Conclusion

The Model Framework is timely in catalysing the discussion of generative AI governance. It has the potential to be the frame of reference for other jurisdictions in this nascent area. It also provides clarity for generative AI stakeholders who are developing new offerings in this arena.

Our technology lawyers are experienced in this sector and closely follow its latest developments. If you wish to discuss any aspect of this alert, please reach out to our team below or your usual Reed Smith contact.

Client Alert 2024-016
