Singapore Publishes Model AI Governance Framework
The ethics and governance of artificial intelligence
12 February 2019
Singapore's Minister for Communications and Information, S. Iswaran, announced the release of the first edition of Singapore's model artificial intelligence (AI) governance framework for public consultation, pilot adoption and feedback at the World Economic Forum Annual Meeting in Davos, Switzerland. The Model Framework, published by Singapore's Personal Data Protection Commission (PDPC), is the first in Asia to provide detailed and readily implementable guidance that organisations may adopt to address key ethical and governance issues when deploying AI solutions.
The Model Framework is one of several AI initiatives in Singapore and follows the establishment of Singapore's Advisory Council on the Ethical Use of AI and Data, which is chaired by former Attorney-General V.K. Rajah SC, and the establishment of a Research Programme on the Governance of AI and Data Use at the Singapore Management University to advance and inform scholarly research on AI governance issues.
The Model Framework builds on the themes first raised in a discussion paper published in June 2018 by the PDPC and the Info-communications Media Development Authority of Singapore (IMDA).
The Model Framework is algorithm-agnostic, technology-agnostic, and sector-agnostic. It provides guidance to organisations that are seeking to deploy AI solutions at scale so that they may do so in a responsible manner. It is intended to help organisations
- build customer confidence in AI by using such technologies responsibly and mitigating the different types of risk involved in AI deployment, and
- demonstrate reasonable efforts to align their internal policies, structures and processes with relevant accountability-based practices in data management and protection.
The Model Framework is voluntary and the extent to which organisations adopt the recommendations in the Model Framework depends on several factors, including the nature and complexity of the AI used, the extent to which AI is employed in decision-making, and the severity and probability of the impact of the autonomous decision on individuals.
The Model Framework is also underpinned by two guiding principles:
- that organisations using AI in decision-making should ensure that the decision-making process is explainable, transparent and fair, and
- that AI solutions should be human-centric.
Guidance is provided under the Model Framework in four key areas: (a) internal governance structures and measures, (b) determining AI decision-making models, (c) operations management, and (d) customer relationship management.
Internal Governance Structures and Measures
Organisations should have internal governance structures and measures to ensure robust oversight of their use of AI. Clear roles and responsibilities should be assigned to appropriate personnel and/or departments for the activities involved in AI deployment. Personnel should be properly trained and provided with the necessary resources and guidance for them to discharge their duties. A sound system of risk management and internal controls that addresses the risks involved in the deployment of the selected AI model should also be implemented.
Determining AI Decision-Making Model
Organisations should decide on their commercial objectives before deploying AI solutions, and weigh those objectives against the risks of using AI. To that end, organisations should, on an ongoing basis, identify and review the risks relevant to their technology solutions, mitigate those risks, and maintain a response plan should mitigation fail. This process should be documented in a periodically reviewed risk impact assessment. Organisations should also consider the probability and severity of harm to an individual when determining the level of human oversight in the organisation's AI-assisted decision-making process.
Operations Management
The quality and selection of data are critical to the success of an AI solution. It is hence important that good data accountability practices are maintained (e.g., understanding the lineage of data, ensuring data quality, minimising inherent bias, having different datasets for training, testing and validation, and periodically reviewing and updating datasets). Organisations should also consider measures to enhance the transparency of algorithms through the concepts of explainability, repeatability and traceability.
Customer Relationship Management
To build and maintain open relationships with individuals (including employees), organisations should put in place appropriate communication strategies. The Model Framework hence suggests greater transparency, in the form of disclosures as to whether AI is used in products and/or services and how AI may affect an individual, as well as communication channels for customers (such as feedback channels and decision review channels). In this connection, organisations should develop a policy on what explanations to provide to individuals. Human-AI user interfaces should also be duly tested, and usability problems addressed. In addition, organisations should consider whether to provide individuals the option to opt out, and whether that option should be offered by default or only upon request.
IMDA and the WEF will be engaging with organisations to discuss the Model Framework in greater detail and facilitate its adoption. Under this collaboration, IMDA and the WEF's Centre for the Fourth Industrial Revolution (C4IR) will also develop a measurement matrix for the Model Framework, which regulators and certification bodies globally can adopt when assessing organisations' AI deployment initiatives, as well as a discussion paper outlining policy options for facilitating cross-border data flows.