
Artificial Intelligence Risk

Part Three: Recent Global Highlights

05 February 2020

Artificial intelligence (AI) creates huge opportunities for businesses globally across all sectors. However, the use of AI also brings the potential for significant legal, ethical and reputational exposure. Recently we have seen global regulators grappling with these risks, with the announcement of a vast number of legal developments, including the publication of legislation and ethical guidelines. Civil courts have been active too, with several recent judgments addressing liability for AI.

In this four-part series of articles we consider some of the emerging global themes and highlight recent international developments. We also explore what 2020 is likely to mean for the global regulatory outlook and the steps businesses should be taking now to minimise the risks arising from their use of AI.

In part three we look back at developments from 2019.

The Year in Review

Global companies need a global approach. Each month of 2019 brought new AI developments, and we have highlighted some of the key events here. There will be many further developments in 2020.

  • French Senate’s report on the European Strategy for AI. The report argues in favour of a co-ordinated European approach and increased investment in AI across Europe.
  • First edition of Singapore’s Model AI Governance Framework. The accountability-based framework, which is the first of its kind in Asia, sets out guiding principles on the use of AI and provides practical measures that can be implemented by organisations using AI. The framework focuses on: (1) internal governance; (2) decision-making models; (3) operations management; and (4) customer relationship management.
  • German Association for the Digital Economy’s Guidelines for AI. These address digital ethics, challenges for the labour market and the relationship between industry, politics and society.
  • Updated Memorandum of Understanding between the UK’s Financial Conduct Authority (FCA) and Information Commissioner’s Office (ICO). This includes new paragraphs on sharing confidential information (including personal data) and provides that the FCA and ICO can run investigations in parallel if necessary. This means businesses could face enforcement action from both regulators. Both have significant sanctioning powers. Under the GDPR, the ICO can enforce a penalty of up to four per cent of global turnover and has already proposed to fine British Airways 1.5 per cent of its 2017 global turnover for GDPR breaches. The FCA can also rely on the broad principle of market efficiency to impose significant fines.
  • Code of Conduct for AI and other data-driven technologies for the UK’s National Health Service. This encourages developers to meet key principles and makes it easier for companies to work with the health service to develop new technologies.
  • Singapore International Commercial Court judgment in B2C2 Ltd v Quoine Ltd. One of the first civil cases addressing contractual liability for an algorithm. When considering whether contracts entered into by computer systems pursuant to an algorithm were void for unilateral mistake, the court held that the mind in question was that of the programmer at the time the program was written (rather than at the point of contract). Although the deterministic software in question was distinguished from AI, the court also considered whether AI could, in due course, be said to have a mind of its own. It appeared to leave the door open by acknowledging that the law here will continue to develop.
  • Germany’s AI Observatory opens. It will analyse the effect of AI on labour and society and promote responsible and people-centred use of AI.
  • European Commission High-Level Expert Group’s Ethics Guidelines for Trustworthy AI. Europe stakes its claim to driving the global debate on rules for achieving ethical AI. The guidelines emphasise human oversight and fundamental rights; technical robustness and safety; privacy and data governance; transparency; diversity and fairness; societal and environmental well-being; and accountability.
  • Singapore’s Intellectual Property Office launches Accelerated Initiative for AI. The initiative aims to grant AI patents within six months of application.
  • OECD’s AI Guidelines. The OECD’s 36 members, along with non-members from South America and Romania, signed up to values-based principles endorsed by the US. The principles expressly mention the rule of law, human rights, democratic values, transparency and accountability.
  • Illinois passes the AI Video Interview Act and becomes the first US state to regulate the use of AI in job interviews. The law came into effect on 1 January 2020 and requires employers to gain consent from job applicants before using AI to evaluate candidates in video interviews.
  • Germany’s Institute for Standardisation announces work on AI Roadmap. This will contain reliable norms and standards to further advance AI made in Germany and is expected in autumn 2020.
  • France adopts new rules on testing autonomous vehicles on public roads. These include requirements regarding the ability to deactivate the technology or take control of the vehicle, and address questions of liability when the delegated driving functions are activated.
  • European Commission High-Level Expert Group’s Policy and Investment Recommendations for Trustworthy AI. These seek to guide AI towards sustainability, growth, competitiveness and inclusion.
  • Japan publishes Comprehensive AI Strategy. This sets out the Japanese government's top priorities for leveraging the use of AI and is based on the Social Principles of Human-Centric AI previously published by the Japanese government.
  • China’s New Generation of AI Governance Principles – Developing Responsible AI. These lay down an AI governance framework and the corresponding action plan in China.

“One major issue with the wider adoption of AI is trust. To build trust, there must be proper human oversight and explainability when designing and using AI – this is a consistent theme in the principles for ethical and responsible use of AI published in different jurisdictions. It has been predicted that we will see the rise of AI to govern AI. It remains to be seen whether such an innovative approach can generate sufficient trust from both users and regulators.” - Ling Ho, Partner, Hong Kong

  • UK Competition and Markets Authority’s Digital Markets Strategy launches. This includes consideration of new regulatory structures in digital markets. In particular, the CMA will continue to monitor the development of Machine Learning and AI to ensure it does not lead to anti‑competitive behaviour or consumer detriment.
  • France adopts a new law on the organisation and transformation of its health system and provides for the creation of a platform for health data. This will gather, organise and share national health data. It aims to promote innovation in the use of health data, including uses linked to the development of AI. The platform, referred to as the ‘Health Data Hub’, was officially created in November/December 2019.
  • Poland’s draft AI Development Policy for 2019-2027. The policy emphasises the need for a human-centric approach to AI and for education at all levels. AI development in Poland will be boosted through government-sponsored initiatives in the academic community and innovation platforms.
  • UK FCA publishes article on AI in the Boardroom. The advent of AI systems means that boards and senior managers of regulated firms must take business responsibility for the major challenges and issues raised by AI.
  • Saudi Arabia’s Royal Decree to establish an AI Centre. This will enhance the drive towards innovation and digital transformation in Saudi Arabia and help develop efficiency through the use of AI and big data.
  • World Economic Forum paper on Responsible Use of Technology. This outlines a framework for organisations and governments to integrate ethics and human rights into technology development across three phases of the product life cycle: (1) design/development; (2) deployment/sale; (3) use and application.
  • UK High Court dismisses challenge to Automated Facial Recognition (AFR). An individual challenged the use of AFR by South Wales Police in public spaces on human rights grounds. The court ruled that the interference with the individual’s right to respect for private and family life was lawful because AFR was used for a legitimate aim; it struck a fair balance between the rights of the individual and the community; and was not disproportionate. The court pointed to the transparent use of AFR, the fact it was used for a limited time on each occasion, and that images were not stored if they did not match individuals on the police watch-list. The claimant is appealing to the Court of Appeal. In the meantime, companies considering using AFR should ensure they undertake a legal impact assessment as the considerations to be balanced will differ in each case.
  • Close of UK ICO consultation period for Framework for Auditing AI. A draft framework will follow in January 2020. It will include governance and accountability considerations and specific risk areas including fairness, accuracy and trade-offs. The ICO has emphasised that firms should not underestimate the investment and effort it will expect from them.
  • Singapore establishes National AI Office. The office will set the national agenda for AI and catalyse efforts to work on national AI priorities.
  • French Patent Office updates guidelines. The update includes additional guidance on AI, including a list of AI or machine-learning technologies that can be patented (e.g. speech recognition, robotics and control/command processes, predictive analysis). This demonstrates that AI can now be protected under French patent law.
  • UK Intellectual Property Office updates patents Formalities Manual. The amendments confirm that AI tools will not be accepted as the “inventor” for the purposes of a patent application. Failure to name a person as an inventor will result in the patent application being refused.
  • European Commission Report on Liability for AI and other emerging digital technologies. The report clarifies that certain characteristics of new technologies may make it difficult to claim compensation and allocate liability fairly. The recommendations include strict liability for digital technologies operating in a public space, and important developments regarding who that liability should lie with; joint and several liability when multiple organisations create a product; new duties of care for operators of technology; and mandatory insurance for certain technology. We expect more activity from the EC in this area in 2020.
  • Singapore rolls out National AI Strategy. The strategy takes a human-centric approach. Its objectives are to: (1) position Singapore as a global hub for AI; (2) use AI to generate economic value and improve lives; and (3) improve society’s understanding of the benefits, limitations and risks of AI and equip the workforce with necessary AI competencies. AI technologies will be used in transport and logistics, municipal services, healthcare, education and border clearance.
  • Hong Kong Monetary Authority’s Guidance for the Banking Industry on using AI. The principles include the following: boards and senior management are accountable for AI-related outcomes; banks should ensure explainability and ongoing monitoring of AI applications; and banks using AI must ensure they produce ethical and fair outcomes and safeguard customers’ personal data.
  • Chinese Patent Office’s draft amendments to the Patent Examination Guidelines. Chinese companies have emerged as top developers of AI-based inventions. The amendments should further promote innovation by providing guidance on claim drafting, disclosure and assessing inventiveness.
  • World Intellectual Property Organization (WIPO) begins public consultation on AI and intellectual property policy. Comment is invited on WIPO’s draft issues paper, which will be used to form the basis of future discussions as to how intellectual property policy makers should approach the use of AI. The consultation closes on 14 February 2020.
  • UK ICO’s draft Guidance on Explaining Decisions made with AI. The guidance is detailed, and recognises that context is key, as there is no single approach to fit all scenarios. It also recognises that a balance needs to be struck; there are risks in both failing to provide an explanation and providing too much information. The consultation closes on 24 January 2020.
  • France adopts a new law with rules on autonomous vehicles. This empowers the French Government to adapt existing rules to autonomous vehicles, including in relation to the applicable liability regime. Measures must be taken within 24 months of the promulgation of the law. The law also addresses access to data from connected vehicles.
  • France establishes Pilot Ethics Committee for the Digital Age. The committee’s first recommendations will deal with chatbots, autonomous cars and medical diagnosis in the era of AI.

“We know that AI is going to be an increasingly powerful tool. We also know that governments, legislators and regulators will intervene. We can say with almost complete certainty that some companies will make big mistakes and suffer reputational damage, but that many will have put in place structures to minimise risk as far as possible whilst taking advantage of the huge opportunities. Being in that second group will not be easy, but should be at the very top of the corporate agenda in the 2020s.” - Phillip Souta, Head of UK Public Policy, London