
Artificial Intelligence Risk

Part One: The risks

05 February 2020

Artificial intelligence (AI) creates huge opportunities for businesses globally across all sectors. However, the use of AI also brings the potential for significant legal, ethical and reputational exposure. Recently we have seen global regulators grappling with these risks, with the announcement of a vast number of legal developments, including the publication of legislation and ethical guidelines. Civil courts have been active too, with several recent judgments addressing liability for AI.

“The opportunities that AI poses are clear, but too often the risks are oversimplified and underestimated. As companies embrace innovation and use technologies to stay competitive, the ethical considerations, in addition to the legal requirements, need urgent consideration.” Dessislava Savova, Partner, Paris

In this four-part series of articles we consider some of the emerging global themes and highlight recent international developments. We also explore what 2020 is likely to mean for the global regulatory outlook and the steps businesses should be taking now to minimise the risks arising from their use of AI.

In this first part we look at the risks.

Despite the legal and ethical questions posed by AI, businesses are remarkably confident in their ability to understand and address the risks. In a recent survey of 200 board members from large businesses around the world, carried out by the Economist Intelligence Unit on behalf of Clifford Chance, 88 per cent agreed (somewhat or strongly) that their board fully understands the legal, regulatory and ethical implications of their company’s use of data and AI. However, given that the use of AI is nascent in most organisations, and given the ever-changing regulatory and legislative landscape, respondents’ claims that they fully understand the risks could be overstated.

The Risks

Over 60 sets of guidelines, reports and recommendations were issued by governments and industry bodies around the globe last year alone. While some consistent themes emerge, a standard set of international principles is yet to be developed. Businesses using or investing in AI must ensure that all applicable principles from this global patchwork of guidance are considered and reflected in new or existing policies, as applicable.

  • Many of the newly-published AI guidelines and ethical frameworks emphasise the importance of human oversight, governance and accountability. There is an emerging set of common themes centred on fairness, interpretability and robustness of AI. Businesses need to understand how their AI operates: how decisions are made and how to challenge its operation. Crucially, they must be able to explain this to employees, customers and regulators in an accessible and transparent way. The same standards apply whether AI is bought in or built in‑house.
  • In the EU, there has been an increased antitrust enforcement focus on businesses that use AI and algorithms, including decisions against Google’s advertising business and investigations into digital platforms. Businesses should consider the extent to which anti‑competitive issues arise when using or investing in AI, including the risk of AI systems unintentionally learning anti‑competitive behaviour as a way to achieve a seemingly “optimal” outcome.
  • Whether AI is used for autonomous vehicles, healthcare or financial services, the underlying data being used to develop and train the AI is key. Businesses must interrogate any datasets used in AI tools they develop, purchase or invest in. They must ensure data has been lawfully collected, is used appropriately and is not biased.
  • Intellectual property offices around the world have started to get to grips with the role played by AI in creating inventions. The trend to date has been to maintain the status quo that a human inventor is required; this may have a knock-on effect for the ability to protect AI-created inventions in the future.
  • Issues around liability for AI and algorithms, and in connection with human rights, are beginning to be explored by the civil courts. Businesses need to ensure that their contractual terms with customers and third-party suppliers of AI technologies remain fit for purpose and that they anticipate potential exposures. They should also assess the risk of adverse impacts on human rights arising from the use of AI.

“Companies and governments are competing for dominance in AI – a technology with the potential to impact every area of life, from national security to healthcare to financial services. In the 2020s, winners and losers will emerge. However, participants trying to advance too rapidly risk failing to properly evaluate and address social, ethical, and legal issues.” Megan Gordon, Partner, Washington D.C.