Bad bots and black boxes

Slave to the algorithm

24 November 2017

From self-driving cars to virtual personal assistants, recent mainstream media coverage of artificial intelligence has tended to focus on a somewhat clichéd vision of AI. However, while very real progress is being made in those arenas, it is arguably AI’s less visible applications that are more meaningfully altering how our modern societies function. The phrase “artificial intelligence” has become something of a catch-all for any kind of futuristic-sounding technology. Most of the time, though, what people are really talking about when they refer to AI is machine learning – and, more specifically, a subset of that technology known as deep learning. While the science behind deep learning is complex, even just a rudimentary understanding of it can help to reveal the difficulties AI gives rise to.

Deep learning involves very large networks of artificial neurons (which are, in practice, simply mathematical functions), arranged in layers and exposed to masses of data. Each neuron combines the signals it receives, applies a simple calculation, and passes the result on to the next layer, where the process repeats. During training, the network compares its outputs against known examples and adjusts the strength of the connections between neurons, gradually refining its calculations. Deep learning can essentially be thought of as a way for machines to ‘train’ themselves to recognise patterns, from which decisions can then be made.
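For the technically curious, the sketch below shows the idea in miniature (in Python, using the numpy library; the network sizes and data are invented purely for illustration). Data flows through layers of simple functions, and each training step nudges the connection weights to reduce the error against known answers. It is a toy, not a production system.

```python
import numpy as np

# A toy two-layer network, for illustration only. Real deep learning
# systems use frameworks such as PyTorch or TensorFlow and networks
# many orders of magnitude larger.

rng = np.random.default_rng(0)

# Weights connecting 3 inputs -> 4 hidden neurons -> 1 output
w1 = rng.normal(size=(3, 4))
w2 = rng.normal(size=(4, 1))

def forward(x):
    """Pass data through both layers; each neuron applies a
    weighted sum followed by a simple non-linear function."""
    hidden = np.tanh(x @ w1)   # layer 1: combine and transform
    output = hidden @ w2       # layer 2: combine into a final score
    return hidden, output

def train_step(x, target, lr=0.01):
    """One training step: nudge the weights to reduce the error
    between the network's output and a known target value."""
    global w1, w2
    hidden, output = forward(x)
    error = output - target                         # how wrong were we?
    grad_w2 = hidden.T @ error                      # blame for layer 2 weights
    grad_hidden = (error @ w2.T) * (1 - hidden**2)  # tanh derivative
    grad_w1 = x.T @ grad_hidden                     # blame for layer 1 weights
    w2 -= lr * grad_w2
    w1 -= lr * grad_w1

x = rng.normal(size=(5, 3))        # five example inputs
target = rng.normal(size=(5, 1))   # their 'correct' answers
for _ in range(100):
    train_step(x, target)          # calculations refined, step by step
```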

Who’s invited to a job interview? What price is your petrol? Who gets a loan? These are all questions that are increasingly being outsourced to such algorithms. The opportunities here – both for cost-saving and better-informed decision-making – are considerable. However, the complexity of the interactions between the different layers of neurons makes it extremely hard to untangle the reasoning of such systems, and this creates significant legal risks.

Automated collusion

One of the most prominent uses of artificial intelligence to date has been in dynamic pricing systems – programs that monitor a variety of market factors to inform a seller’s pricing strategy. Not only can they reduce the cost of labour spent on monitoring competitors’ pricing movements and other market behaviour, but they can also perform these tasks at superhuman speed and scale. Bought anything online recently? There’s a good chance that the price you paid was determined, at least in part, by an algorithm.
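As a flavour of how simple the core of such a system can be, here is a deliberately naive repricing rule in Python. The cost, margin and undercut figures are hypothetical, and real systems weigh far more signals (stock levels, demand, time of day, buyer behaviour):

```python
# A deliberately naive dynamic pricing rule: undercut the cheapest
# competitor while protecting a minimum margin. Illustrative only.

COST = 8.00          # hypothetical unit cost
MIN_MARGIN = 0.10    # never sell below cost + 10%
UNDERCUT = 0.01      # undercut competitors by 1p

def reprice(competitor_prices: list[float]) -> float:
    floor = COST * (1 + MIN_MARGIN)
    if not competitor_prices:
        return floor * 1.5            # no competition: price freely
    candidate = min(competitor_prices) - UNDERCUT
    return max(candidate, floor)      # never breach the margin floor

print(reprice([12.99, 11.49, 13.20]))  # -> 11.48 (undercuts the cheapest rival)
print(reprice([8.50]))                 # -> 8.80 (the margin floor binds)
```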

It is hardly surprising to learn that such technology can be (and has been) used for illicit ends – the most notable example being where algorithms were used by competitors conspiring to inflate the prices of posters on an online marketplace. Here, the human intention to collude was overt, and the algorithm was merely the vehicle through which that intention was effected.

However, more complicated legal questions arise where algorithms, programmed simply to profit maximise, determine that the optimum outcome is achieved by bypassing a price war with competitors and instead opting to raise prices in unison. Is this simply ‘intelligent adaptation’ as permitted under EU law? Or does it go beyond what regulators are willing to stomach? Where there is a lack of explicit human intention to collude, the legal and regulatory position becomes hazier. The difficulties involved in compliance and investigation are compounded by the lack of system transparency. If we’re not entirely sure how the program reached its decision, how can we assess its legality? Cartels that were once conceived in smoke-filled rooms may now be born in black boxes, and this creates issues for everyone – from consumers to retailers to regulators.  
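The concern can be illustrated with a toy model. In the hypothetical Python sketch below, two sellers independently run the same simple rule: never undercut the rival, and probe a small increase whenever the rival tolerates the current price. No communication takes place, yet prices climb in lockstep:

```python
# Two hypothetical sellers, each running the same simple reactive rule.
# Neither is instructed to collude, and no information is exchanged
# beyond publicly visible prices.

def reprice(my_price, rival_price, step=0.05):
    if rival_price >= my_price:
        return rival_price + step   # rival tolerated our price: probe higher
    return rival_price              # rival is cheaper: match, never undercut

a, b = 10.00, 10.00
for round_ in range(10):
    a = reprice(a, b)
    b = reprice(b, a)
    print(f"round {round_}: a={a:.2f} b={b:.2f}")
# Output shows both prices ratcheting upwards in step, with no
# agreement ever having been made.
```

In reality, demand would eventually punish these increases; the point is simply that reactive, profit-seeking rules can align prices without any human agreement ever being reached.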

Algorithmic bias

In addition to the concern that the use of algorithms might have drawbacks for consumers at large, there is also a threat of particular demographics being prejudiced. Whilst part of the great hope for artificial intelligence is that it will remove human emotion from decisions and therefore won’t fall victim to bias, this assumes that such biases won’t be inadvertently programmed into machines by their creators. Systems that determine creditworthiness, for example, might be skewed against a certain demographic if they are trained on a dataset in which loans were unfairly distributed in the first place.
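A toy example makes the mechanism plain. In the hypothetical Python sketch below (using synthetic data and the scikit-learn library), a model is trained on past lending decisions in which one group was unfairly denied loans; the model duly reproduces that unfairness for otherwise identical applicants:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration: a model trained on historically skewed
# lending decisions reproduces the skew, even though nobody has
# programmed it to discriminate.

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)        # a protected attribute (0 or 1)
income = rng.normal(50.0, 10.0, n)   # identically distributed in both groups

# Historical labels: approval should depend on income alone, but
# 30% of deserving group-1 applicants were unfairly denied.
approved = (income > 50).astype(int)
approved[(group == 1) & (approved == 1) & (rng.random(n) < 0.3)] = 0

model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([income, group]), approved
)

# Two applicants with identical incomes, differing only by group:
applicants = np.array([[55.0, 0], [55.0, 1]])
print(model.predict_proba(applicants)[:, 1])
# The group-1 applicant receives a visibly lower approval score,
# because the model has learned the bias baked into its training data.
```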

Some contend that algorithmic discrimination of this nature has already taken hold, specifically in the context of courts. In a number of American states, AI systems are used to determine the likelihood that defendants will reoffend or fail to show up for their court dates, which is then factored into decisions about length of sentence or the provision of bail. An investigation by American non-profit ProPublica argued that these systems are biased against black defendants and, while this was disputed by the programmers behind the system, the fact that it runs on a proprietary algorithm means it cannot be properly scrutinised. Even if access to the code were granted by the developers, the opacity of machine learning systems means it’s unlikely we’d be able to make sense of it.

The use of machine-based decision-making is increasing all the time in a variety of settings and, where discriminatory algorithms are scaled up or transposed into different contexts, there is a risk of in-built biases becoming endemic. Since the nature of deep learning means we cannot simply open up an artificial neural network and inspect the neutrality of its reasoning, ensuring compliance with anti-discrimination law will become very difficult.

Reducing risk exposure

With machine learning technologies now touching all sectors of industry, these risks should be everyone’s concern. The traditional ‘technology company’ is no more – any business relying on systems with a machine learning element must be cognisant of the dangers involved. There is plenty that businesses can (and, indeed, should) be doing to futureproof themselves against risk exposure, from performing due diligence on existing AI systems to scoping potential legal issues before deploying new ones. While the future regulatory landscape for AI may be unclear, one thing is certain: it is no longer an option to sit back and hope for the best.

Thanks to London TMT Group Trainee Jumani Robbins for his contribution to this article.