Lords report on AI algorithms

Committee supports creation of new Centre for Data Ethics & Innovation to help oversee algorithmic decision making

24 May 2018

The House of Lords Science and Technology Committee has published its report, Algorithms in Decision Making. The report acknowledges the huge opportunities presented by algorithms to the public sector and wider society, but also the potential for their decisions to disproportionately affect certain groups. The report is released as the GDPR becomes effective, on 25 May 2018, and in the wake of the recent Facebook controversy centred around the algorithm used by Cambridge Analytica.

The Committee identified a few key areas where the government should take immediate action.

Centre for Data Ethics & Innovation

The report welcomes the Government's plans to create a Centre for Data Ethics & Innovation (CDEI). The Committee sees the CDEI sitting alongside the Information Commissioner's Office (ICO) in overseeing the development of algorithms and the 'decisions' they make.

The government announced its plan to create the Centre for Data Ethics and Innovation in the 2017 Autumn budget. It stated its aim would be to "enable and ensure safe, ethical and ground-breaking innovation in AI and data-driven technologies. This world-first advisory body will work with government, regulators and industry to lay the foundations for AI adoption, which estimates suggest could benefit households across the UK by up to £2,300 per year by 2030 and increase GDP by 10%".

The Government re-affirmed its commitment to create the Centre in January 2018, by launching a call for applicants to be the first chair of the interim centre. It then confirmed in April that it would provide £9m to create the Centre and that the interim Centre for Data Ethics and Innovation will start work on key issues straight away. Its findings will be used to inform the final design and work programme of the permanent Centre, which will be established on a statutory footing in due course.

A public consultation on the permanent Centre will be launched soon.

The biggest challenge the CDEI faces is striking the right balance between recognising and promoting the potential benefits to be derived from machine learning algorithms (for example in healthcare and other public service provision) and the privacy and data security risks they pose to individuals.

With regard to the latter concern, the report believes that some of the provisions in the new General Data Protection Regulation (GDPR), specifically the opt-out from 'automated algorithm decisions', may provide protection. However, there remains a grey area where individuals may not be aware they have been subject to an automated decision in the first place, or where decisions are merely rubber-stamped by a human overseer.

The CDEI and ICO should keep the operation of GDPR under review as far as it concerns algorithms and report to the Government by May 2019 on any areas where they feel the UK's data laws could be further refined. They should also review the extent to which the main sector-specific regulators exercise oversight of the use of algorithms in their sectors and assess if there is a need for the ICO to have greater powers to perform this regulatory oversight if the sector-specific regulators do not see this as a priority.


“The Centre for Data Ethics & Innovation should review the operation of the GDPR, but more immediately learn lessons from the Cambridge Analytica case about the way algorithms are governed when used commercially.”

- Norman Lamb, Chair of the Science and Technology Committee

Government Big Data

The report calls on the government to continue to make public sector datasets available for both 'big data' developers and algorithm developers through new 'data trusts' and make better use of its databases to improve public service delivery.

The report calls on the government to instruct the Crown Commercial Service to commission a review from the Alan Turing Institute or other expert bodies to set out a procurement model for algorithms developed with private sector partners to ensure their full value is realised in the public sector.

Additionally, the government should appoint a ministerial champion and produce, publish and maintain a list of where algorithms are being used 'with significant impacts' within central government.

“The report highlights the huge social and economic benefits that can come from AI. But it also recognises the challenges around data bias, transparency and accountability that must be addressed. We agree that the Centre for Data Ethics and Innovation has a vital role to play in deepening understanding and developing policy on these issues. Time is of the essence and we want to see the Centre up and running as soon as possible.”

- Sue Daley, Head of Cloud, Data and AI, techUK

Algorithmic Bias & Transparency

Whilst part of the great hope for artificial intelligence is that it will remove human emotion from decisions and therefore won’t fall victim to bias, this assumes that such biases won’t be inadvertently programmed into machines by their creators. Systems that determine creditworthiness, for example, might be skewed against a certain demographic where they are trained on a dataset in which loans were unfairly distributed in the first place. Or the data used may simply be unrepresentative of those who may ultimately be affected by the algorithm’s use.
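The creditworthiness example above can be sketched in a few lines of Python. The data and the "model" here are entirely hypothetical and deliberately naive; the point is only to illustrate how a system trained on unfairly skewed historical decisions reproduces that skew, even when the two groups are equally creditworthy:

```python
# Hypothetical illustration of historical bias leaking into a model.
# Groups A and B have identical underlying creditworthiness, but the
# historical records show loans unfairly denied to group B.
from collections import defaultdict

# (group, creditworthy, loan_approved) - invented historical records
history = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", True, True),
]

# A naive "model": approve if the historical approval rate for the
# applicant's group exceeds 50%. Group membership acts as a proxy feature,
# so the past unfairness is baked directly into future decisions.
approved = defaultdict(int)
total = defaultdict(int)
for group, _, loan_approved in history:
    total[group] += 1
    approved[group] += loan_approved

def predict(group):
    return approved[group] / total[group] > 0.5

print(predict("A"))  # True  - group A applicants are approved
print(predict("B"))  # False - equally creditworthy group B applicants are denied
```

No malicious intent is needed for this outcome: the bias arrives entirely through the training data, which is precisely the concern the Committee raises.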

Transparency is clearly a key consideration where potential algorithmic bias is concerned. The Committee calls on the CDEI to examine how explanations of how algorithms work could be required to be of sufficient quality for the average person to comprehend and challenge an algorithm's decisions.

The CDEI and the ICO should also review the Government's current position of not going beyond the GDPR's non-binding provisions, given that individuals are not currently able to challenge the results of all algorithmic decisions or to seek redress for the impacts of those decisions.


Ada Lovelace Institute

The Lords report follows quickly on the heels of the March announcement of the creation of the Ada Lovelace Institute. This new institute, set up by a coalition of organisations including the Alan Turing Institute and the Nuffield Foundation, was created in direct response to recommendations made in the Royal Society and British Academy’s 2017 report on Data Governance.

The purpose of the Institute will be to act as an independent voice, speaking on behalf of the public interest and society, and informing the thinking of governments, industry, public bodies and civil society organisations, in the UK and internationally. The Institute will:

  • Convene diverse voices to build a shared understanding of the ethical questions raised by the application of data, algorithms, and artificial intelligence (AI).
  • Initiate research and build the evidence base on how these technologies affect society as a whole, and different groups within it.
  • Promote and support ethical practices that are deserving of public trust.

It would seem likely that any future CDEI would look to draw on the output of this new institute.