UK can lead the way on AI
Put ethics first, says Lords Committee
21 May 2018
Artificial intelligence (AI) provides a host of opportunities to make our lives easier. The UK already boasts several strong AI companies and world-class research institutions, and is therefore well placed to help shape the development of the field.
On 29 June 2017 the House of Lords appointed a committee “to consider the economic, ethical and social implications of advances in artificial intelligence”. The report, published in April 2018, concludes that the UK is in a strong position to be among the world leaders in the next stage of AI development, and sets out recommendations to help achieve this, including:
- establishing a cross-sector AI code;
- investing in training for new AI jobs;
- ensuring data used by AI systems is not monopolised;
- making sure technology is developed on ethical guidelines;
- updating legislation and regulations so these are sufficient for AI.
Cross-Sector Code
The proposed code would include five principles designed to: preserve the intelligibility and fairness of AI; protect the privacy and data rights of individuals; promote education about AI; and prohibit autonomous systems with the power to hurt, destroy or deceive human beings. The committee hopes the code would be adopted both nationally and internationally.
Training for AI Jobs
The report calls for the Government to encourage greater diversity in the training and recruitment of AI specialists. This is because AI will create some jobs while making others redundant, so the job market needs to be ready to adapt if productivity is to rise with the introduction of AI. The committee believes this preparation should begin at earlier stages of education, with the ethical design and use of technology becoming an integral part of the curriculum.
Avoiding Data Monopolies
To ensure that data used by AI systems is not monopolised and that technology is developed along ethical lines, the report calls on the Government to use the Competition and Markets Authority to prevent large technology companies operating in the UK (such as Amazon, Facebook, Google, Twitter and Microsoft) from dominating the control of data. Lord Clement-Jones stated: “We looked to see where it had been done well and done badly. GM Foods is where it was done badly, and human embryo research is where it has been done well.”
Technology Developed on Ethical Guidelines
The report focuses on creating an ethical AI framework, in part because, as Lord Clement-Jones, chair of the Lords Select Committee on Artificial Intelligence, acknowledged, the UK cannot compete with the United States and China in terms of investment. “Where we can compete is in the way that we co-ordinate our research and achieve agreement internationally on an ethical framework,” he said. “There are great opportunities, but we will not be able to take them unless we de-risk AI.”
Updating the Law to Keep Pace with AI Development
The report also recognises that the emergence of new technology presents a challenge for the existing legal and regulatory framework, especially when determining who should be held accountable for decisions made or informed by AI: a decision on a mortgage application, a medical diagnosis, or an action taken by an automated vehicle on the road, for example.
There is no doubt that legal liability and regulation are major hurdles that AI will have to overcome before it is introduced on a larger scale. The committee has warned that existing liability law will not be sufficient when AI systems malfunction or cause harm, and is therefore calling on the Law Commission to examine whether it needs updating. Legal clarity is needed on who is held responsible for decisions made by artificial intelligence.
The Committee's report does not suggest the need for any specific AI regulation "at this stage". Instead, existing sector-specific regulators should assess whether the current regulation is sufficient.
The passing of the Data Protection Bill and the coming into force of the EU General Data Protection Regulation (GDPR) will likely address many of the concerns around the handling of personal data, a key element of AI. Therefore, despite entrepreneurs such as Elon Musk calling for proactive regulation of artificial intelligence, the report avoids proposing blanket AI-specific regulation and calls for a more reactive approach. The ethics behind the decisions machines make are still disputed, making hard and fast rules impossible at this time.
It is clear that although AI has not yet taken over Britain, the Government is looking to be prepared for when it does.
This article was written by Lara Hopkins, Trainee, TMT Group