Artificial Intelligence Risk
Part Two - Assessing your risk
05 February 2020
Artificial intelligence (AI) creates huge opportunities for businesses globally across all sectors. However, the use of AI also brings the potential for significant legal, ethical and reputational exposure. Recently we have seen global regulators grappling with these risks, with the announcement of a vast number of legal developments, including the publication of legislation and ethical guidelines. Civil courts have been active too, with several recent judgments addressing liability for AI.
In this four part series of articles we consider some of the emerging global themes and highlight recent international developments. We also explore what 2020 is likely to mean for the global regulatory outlook and the steps businesses should be taking now to minimise the risks arising from their use of AI.
In this second part we look at assessing the risks.
How to Assess your AI Risks - Practical Issues for Global Businesses
AI is often used across different business lines, geographies, jurisdictions and technology teams, and is not necessarily readily accessible or adequately explained to those performing the usual control functions in institutions (such as legal, compliance and internal audit). This creates risk for institutions, and it needs to change.
Tools for Responsible AI Management
Governance and accountability
In our experience, the starting point for proper governance of AI is clear leadership from the board, supported by a hybrid team of general counsel and senior data, compliance, risk and policy officers who oversee AI risk management and embed a culture of transparent and ethical use of AI.
For many businesses using AI, a stand-alone framework detailing their approach to AI and incorporating applicable global regulatory requirements and international norms (such as human rights responsibilities) will be crucial and should be a priority. If AI issues arise, this framework may be the first document requested by regulators, litigants and politicians. It can also help to demonstrate a business's proper understanding of its AI use and its active management of AI risk.
For businesses that are more extensive users of AI (or that may have more to lose reputationally from a particular use of AI), a dedicated AI and/or data board may also be appropriate, to ensure that legal, reputational and ethical AI risks are given proper consideration and that suitable AI policies are adopted and adhered to internally. AI boards should not be used merely for PR purposes: a careful evaluation of whether, why and how an AI board is appropriate is a valuable first step.
“Due diligence and regular audit – both of the AI’s functionality and the apportionment of contractual and tortious liability between programmers, suppliers, the company and/or its clients in standard terms of business – should be a top priority. Organisations that fail to monitor and take action face regulatory sanction and unexpected civil exposures when the consequences of AI usage are tested in court.” - Kate Scott, Partner, London
There is no “one-size-fits-all” solution for addressing AI risk. To assess which tools are appropriate and how they should be used, we recommend an AI risk audit.
Due diligence of AI usage
To minimise AI risk, businesses need to assess their use of AI from supply chain to clients, encompassing AI technology that has been bought (from suppliers or via M&A) or built in-house. This should include:
- Data: Where does it come from and do you have the right to use it? Is any bias in the data inputs being addressed?
- Transparency: How is data being used and decisions made? How is that communicated to stakeholders?
- Explanations: Is there a written explanation of the AI’s functionality? How and where is it documented and is it up to date?
- Review: Are you monitoring and/or testing the AI’s decision-making and its impact? To what extent is there human oversight?
- Limits: Have boundaries been set regarding use of AI? Who could be harmed by its use? Are there uses that you will not countenance? Is there an off-switch?
- Liability: What is the contractual framework for the use of AI? How is liability apportioned between the business and its suppliers and/or customers?
Consideration of existing policies/control frameworks
Businesses will typically have many existing policies that may contemplate or impact the use of AI; for example, GDPR compliance, human rights policies, competition policies, codes of conduct and new product approvals. Where policies already contemplate AI use, are they consistent with one another? If they are not, do they need amending?
Businesses need to determine and document management responsibility for their use of AI, with a clear governance structure and a consistent approach. Senior managers need to understand the technological capabilities of AI and how to challenge its operation.