Artificial Intelligence Risk
Part Four - AI Predictions 2020
05 February 2020
Artificial intelligence (AI) creates huge opportunities for businesses globally across all sectors. However, the use of AI also brings the potential for significant legal, ethical and reputational exposure. Recently we have seen global regulators grappling with these risks, with the announcement of a vast number of legal developments, including the publication of legislation and ethical guidelines. Civil courts have been active too, with several recent judgments addressing liability for AI.
In this four-part series of articles we consider some of the emerging global themes and highlight recent international developments. We also explore what 2020 is likely to mean for the global regulatory outlook and the steps businesses should be taking now to minimise the risks arising from their use of AI.
In this final part we look at our predictions for AI in 2020.
Binding laws and regulatory frameworks
Globally, we will continue to see the evolution of legal and regulatory frameworks to address the use of AI across industries, with ongoing adaptation of existing law and regulation, alongside the implementation of new, stand-alone frameworks. Examples include:
- In the EU, we expect new AI-specific legislation, following the announcement by the European Commission’s new President Ursula von der Leyen that rules will be drawn up during her first 100 days in office. The White Paper on the legislative framework is due in February. It is expected to include proposals for a temporary ban on the use of facial recognition technology in public spaces, while its impact is assessed; and for binding regulations regarding high-risk applications of AI in sectors such as health, transport and policing.
- In France, we expect developments relating to mobility, autonomous vehicles and healthcare. Germany will establish 12 regional digital research hubs to connect businesses with partners from science and start‑ups, focusing on e‑payment, mobility and health. The UK’s Law Commission is also consulting on the regulatory framework for automated vehicles.
- In the US, on 7 January 2020, the White House Office of Science and Technology Policy published, for comment, its 10 principles for government agencies developing AI regulations. The California Consumer Privacy Act, in effect since 1 January 2020, requires entities to provide detailed disclosures on how personal information is used with AI. Proposals for a federal privacy law have been tabled for discussion which, if effected, would impact the use of AI. Additionally, a new rule in effect from January 2020 limits the export of geospatial AI software outside the US, highlighting the importance of such technology to national security.
- In Singapore, the Monetary Authority of Singapore has partnered with various financial institutions to create a framework, called Veritas, for the responsible adoption of AI and Data Analytics. Its report is expected in the second half of 2020.
Best practice guidance
We will continue to see the publication of detailed guidance by international regulators and industry-specific bodies that will expand “best practice” requirements for international firms. For example:
- The Hong Kong Monetary Authority will issue guidance on consumer protection principles.
- The UK’s data protection regulator, the Information Commissioner’s Office (ICO), will publish an AI auditing framework; further guidance on explaining AI decisions; and a data protection accountability toolkit.
Focus on systems and controls
- We can expect a wider focus on how companies address practical issues such as the governance of AI, explainability, testing and controls. Financial services firms are likely to be a test-ground, given the pronouncements by financial regulators in 2019 regarding the need for adequate systems and controls.
- Crucial topics to consider include AI systems’ fitness for purpose, the accuracy of marketing claims, testing, insurance cover, and increasing regulatory scrutiny of the use of AI in the financial services industry.
Focus on ethical use of AI
- We will see the continued development of principles for ethical and responsible AI by regulatory and industry bodies around the world, addressing issues such as fairness and bias. Businesses will be expected to educate their employees and document their ongoing consideration of ethical issues when developing or using AI.
Increased enforcement activity
- Reputational concerns will become increasingly important in the face of growing enforcement activity. We anticipate that regulators, having laid out their principles, will shift towards enforcement action to demonstrate the cost of not complying with existing AI law and regulation.
- Big tech remains high on regulators’ agendas. We expect increased enforcement action against large data-heavy organisations, as well as those in financial services and beyond that rely on, or work closely with, data-driven businesses. Antitrust, data and industry-specific or sectoral regulators (e.g. global financial or cybersecurity regulators) can all be expected to take enforcement action.
AI disputes in court
- We will see more cases heard in local courts – this is critical in building out the limited existing body of case law. For example, in the UK, Tyndaris v VWM will go to trial in mid-2020. This will be the first time the English courts consider who is liable when an AI-powered investment system has allegedly caused substantial losses.
Protecting inventions created with AI
- We expect that in 2020 patent registries will focus on whether inventions created using AI can be sufficiently described for patent protection to be granted. This stems from the “black box” nature of AI: the more advanced systems become, the harder it is to explain how an invention is implemented, and such an explanation is a key requirement for patentability. As AI becomes more creative, establishing who owns the intellectual output it creates may also become more difficult.
- The debate concerning AI as an inventor for patent applications or as an author for copyright works will continue.
AI audits will be expected
- The calls for more extensive auditing of AI (both internally within organisations and by external bodies) are likely to grow, given the matrix of technology (and third parties) involved in many AI systems.
- We recommend that companies assess how their use of AI will be monitored, and by whom, and ensure that any issues are escalated to the Board on an ongoing basis.