Tech Developments in APAC: Review and 2020 Outlook - AI
Get ready for dramatic increase in the number of AI laws
14 April 2020
As we enter a new decade and in this year of the rat, we look - in a series of articles - at tech developments in APAC, with a particular emphasis on the jurisdictions of Australia, China, Hong Kong and Singapore.
The key developments to note are in:
- AI: The use of AI, whilst creating huge opportunities in areas such as financial services, healthcare and autonomous vehicles, also brings the potential for significant legal, ethical and reputational exposure. APAC regulators have been considering these risks and we look at the various guidelines and ethical frameworks that have been published.
- Crypto-assets: Globally, crypto-assets were hardly out of the press (or the crosshairs of regulators) in 2019 and early 2020 with Facebook's proposed stablecoin Libra in its various iterations taking the lion's share of headlines. In APAC, we saw national initiatives in crypto-regulation in Hong Kong, Singapore and Australia and some of the first crypto-cases. [Read the second article in this series focussing on Crypto-assets ]
- Big Tech and Data: "Big Tech" refers to large companies with established technology platforms, such as Alibaba, Amazon, Facebook, Google and Tencent. The financial services offerings of Big Tech firms are expected to grow, with anti-trust and data privacy concerns arising from their significant resources and widespread access to customer data. [Read the third article in this series focussing on Data]
- Patent and IP Protection and Alternative Dispute Resolution: Arbitration is becoming an increasingly popular method of resolving IP disputes in light of its advantages including confidentiality, choice of specialist arbitrators and enabling the avoidance of multiple parallel proceedings in different jurisdictions. Various developments in arbitration in Hong Kong and Singapore in 2019 will facilitate this. We anticipate an increase in arbitration of FRAND / SEP and other patent and IP disputes, consistent with the growth of IP and technology disputes in previous years. [Read the final article in the series on IP]
Over the course of four articles we will be looking in more detail at each of these areas. In this first part of the series we focus on AI.
The use of AI, whilst creating huge opportunities, also brings risks. In a survey of 200 board members around the world carried out by the Economist Intelligence Unit on behalf of Clifford Chance, 88% agreed (somewhat or strongly) that they fully understood the implications of the use of data and AI. This is likely to be an overstatement, given that the use of AI is nascent in many companies and in light of the ever-changing regulatory landscape.
APAC regulators, in particular financial regulators, have been considering these risks, and key themes have emerged in their guidelines and ethical frameworks: the importance of human oversight, governance and accountability; the robustness of AI; and a focus on ethical and transparent use, including fairness, explainability and interpretability.
Tools for responsible AI management include clear leadership from companies' boards and oversight from an appropriate team of general counsel and senior data, compliance and risk officers, as well as a stand-alone framework incorporating applicable regulatory requirements. For further guidance on other tools such as AI risk audits, please access our January 2020 AI risk publication.
2019 Developments and Review
In January 2019, a Model AI Governance Framework was issued by the Personal Data Protection Commission in Singapore (the first of its kind in Asia, with a second edition in January 2020), setting out voluntary guiding principles and practical measures for organisations seeking to deploy AI at scale, focussing on internal governance; the level of human involvement in AI-augmented decision-making; operations management; and stakeholder interaction and communication. For more, see our briefing regarding the same.
March 2019 saw a Singapore International Commercial Court judgment, B2C2 Ltd v Quoine Ltd, one of the first civil cases to address the contracting mind for trades of crypto-currencies executed on an online exchange pursuant to an algorithm (and whether such trades or contracts were void for unilateral mistake). Due to a "technical glitch", the orders were executed at a rate 250 times the rate traded on the same day, and the exchange cancelled the trades. The case was further considered by the Singapore Court of Appeal, which handed down its judgment in February 2020. The exchange was ultimately held liable for breach of contract. The mind in question was held to be that of the programmer, whose knowledge was to be assessed from the point of programming up to the point that the relevant contract was formed. Whilst AI was not involved, there was discussion on how the law should be adapted to the new world of algorithmic programmes and AI in a way which gives rise to the results that reason and justice would lead one to expect. For more, see our briefing here.
In April 2019, the Intellectual Property Office of Singapore (IPOS) launched the Accelerated Initiative for Artificial Intelligence to expedite the application-to-grant process for AI patent applications to as fast as six months, which IPOS touts to be the fastest in the world. For instance, in August 2019, the IPOS announced that Alibaba Group Holdings Limited had been granted an AI patent in just three months, compared to an average of two to four years.
In June 2019, the Chinese National New Generation AI Governance Committee published on the Ministry of Science and Technology website its New Generation AI Governance Principles – Developing Responsible AI, laying down an AI development and governance framework as the basis for future law, to ensure the safe and secure use of AI while promoting development. The eight principles comprise: harmony with values and ethics; fairness and equality; open competition and green development; protection of personal data and privacy; transparency and control; accountability; inter-disciplinary, cross-regional and cross-border cooperation; and agile management and governance.
Two months later, in August 2019, a set of guidelines on the establishment of national AI Open Innovation Platforms was issued, setting out a framework for the development of such platforms in China. Such platforms are government-approved initiatives led by experienced technology companies. Successful examples of platforms already in operation prior to the release of the guidelines are the Baidu Autonomous Driving AI platform and Tencent Medical Imaging AI platform.
In the same month, guidelines were released laying down a plan to establish 20 pilot zones, cities that are committed to building an environment conducive to the development of AI. The first pilot zones will likely be concentrated in key areas earmarked for economic growth, namely, the Jingjinji Metropolitan Region (Beijing-Tianjin-Hebei), Yangtze River Economic Belt and Greater Bay Region (Guangdong, Hong Kong and Macau).
The new principles and guidelines give us a glimpse of China's vision for the development of AI and how major Chinese cities and government-approved initiatives are intended to operate in the future – hubs of innovation driven by ethical and sustainable technological enterprise. For more, see our previous article discussing legal updates from China and Hong Kong.
Regarding Hong Kong, in July 2019, the Hong Kong Securities and Futures Commission (SFC) published Guidelines on Online Distribution and Advisory Platforms dealing with the potential use of AI in the context of online distribution of investment products and "robo advice" (namely, automated investment advice). The guidelines require licensed corporations to: provide sufficient information to clients on how key components of their services are generated, such as how underlying algorithms operate and the limitations and risks involved; properly and effectively manage and supervise the development, operation and testing of algorithms used in digital advice tools; and ensure they have adequate staff with sufficient expertise and understanding of the technology.
In November 2019, the Hong Kong Monetary Authority (HKMA) issued its Guidance for the Banking Industry on using AI. The principles set out are consistent with the themes outlined above, including: boards and senior management being accountable for AI-related outcomes; banks being required to ensure the explainability and ongoing monitoring of AI applications; and ensuring ethical and fair outcomes and the safeguarding of customers' personal data.
At the end of 2019, the HKMA published the report Reshaping Banking with Artificial Intelligence on the application of AI in the banking industry, a collaborative effort with PwC which conducted an industry-wide survey looking at how far Hong Kong’s banking sector has progressed in adopting this technology. The report shares popular AI solutions being implemented by banks globally and defines where Hong Kong is on the spectrum of AI development, considers how banks may implement AI and recommends ways in which different stakeholders can help develop Hong Kong into an AI Innovation hub.
2020 prediction: whilst best practice guidance will continue to be provided, will 2020 be the year that more formal legal requirements are imposed? We predict that there will be a dramatic increase in the number of AI laws.
There may be more scandals around a lack of diversity in tech, arising from issues of race, gender and sexuality, as the public starts to realise how their information is being used and how they may be excluded.
2020 has already seen the HKMA set out its supervisory expectations and sound risk management practices for algorithmic trading (in light of its prevalence, with 40% of authorised institutions surveyed in 2018 undertaking such trading), and will see the HKMA issue best practice guidance on consumer protection principles associated with the use of AI applications. In the second half of 2020, the Monetary Authority of Singapore (MAS) will issue a report on its findings and conclusions from its partnership with various financial institutions, which have created a framework called Veritas for the responsible adoption of AI, focussing initially on use cases in customer marketing, risk scoring and fraud detection.