
Facing up to the risks and rewards of facial recognition

The use of automated facial recognition technology by the police has come under fire. But it’s not just law enforcement that should be taking note of its dangers

14 December 2018

Nearly 125 years ago, a Parisian police officer published a book containing hundreds of images of body parts – tables upon tables of ears, eyebrows and mouths – together with meticulously crafted instructions for how to properly measure and compare them. While this might be a far cry from what we now associate with the field of biometrics, Alphonse Bertillon’s Signaletic Instructions was an early milestone in the quest for more accurate ways of identifying individuals. His work would come to influence the approach of police forces across the globe to criminal identification. Fast forward to the present day and that quest continues, now aided by emerging technologies.

What is automated facial recognition technology?

Artificial intelligence is being used to perform fundamentally the same process of facial feature analysis and comparison that Bertillon envisaged, but at an eye-watering rate and scale. This AI-driven technology, known as automated facial recognition (AFR), is in its relative infancy. However, its impact has already been felt in a number of fields, including law enforcement, financial services, social media and healthcare.

What's the issue?

It has been argued that the technology in its current form disproportionately misidentifies women and ethnic minorities and unlawfully interferes with rights to privacy. As a result, its use has come under intense scrutiny. In the UK, for example, the deployment of AFR-equipped vehicles by the police (at events ranging from Notting Hill Carnival to the UEFA Champions League final) is currently subject to multiple judicial review challenges brought by civil liberties groups. As individuals become increasingly alive to their privacy rights, such challenges are unlikely to subside.

It is not just law enforcement that ought to be taking note of these developments. Facial recognition technology has an immense potential to transform a number of areas of day-to-day life and be utilised positively in a variety of industries. By 2022, the global facial recognition technology market is projected to generate an estimated $9.6 billion in revenue.

These are staggering figures. But despite this, the risks of AFR also need to be meaningfully considered by any organisation looking to exploit it.

Bias – how can automated facial recognition systems create (or worsen) biases?

The problems associated with bias in artificial intelligence are, at this point, well established. The ability of intelligent systems to operate reliably and equitably is inextricably linked to the integrity of the datasets on which they are built, the processes by which they are trained, and the biases and ethics of their human trainers. In the context of facial recognition, it has been shown that AFR software performs differently across demographics, most likely as a consequence of imbalanced training data. The result is a high rate of misidentification, particularly acute among ethnic minorities and women.
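
This disparity can be made concrete by measuring a matcher's error rates separately for each demographic group. The sketch below is a minimal Python illustration using invented records and an invented similarity threshold, not output from any real AFR system:

```python
# Hypothetical evaluation of an AFR matcher's error rates per demographic
# group. The records and threshold below are invented for illustration.
from collections import defaultdict

# Each record: (demographic group, similarity score, ground-truth same-person flag)
records = [
    ("group_a", 0.91, True), ("group_a", 0.42, False), ("group_a", 0.55, True),
    ("group_b", 0.88, True), ("group_b", 0.71, False), ("group_b", 0.38, True),
]

THRESHOLD = 0.6  # scores at or above this count as a "match"

stats = defaultdict(lambda: {"false_match": 0, "false_non_match": 0, "total": 0})
for group, score, same_person in records:
    predicted_match = score >= THRESHOLD
    s = stats[group]
    s["total"] += 1
    if predicted_match and not same_person:
        s["false_match"] += 1        # an innocent person wrongly flagged
    elif not predicted_match and same_person:
        s["false_non_match"] += 1    # the right person wrongly rejected

for group, s in sorted(stats.items()):
    print(f"{group}: false-match rate {s['false_match'] / s['total']:.2f}, "
          f"false-non-match rate {s['false_non_match'] / s['total']:.2f}")
```

If the two rates diverge sharply between groups, the system is treating those groups differently, whatever its headline accuracy figure says.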

It is not difficult to imagine how problems could materialise in practice. One growing commercial application of AFR is by banks and financial services companies as a means of authentication. Customers wishing to access their finances remotely can look into their phone camera or webcam, either instead of or in addition to entering a password, and be verified as the rightful owner of the account. However, where the facial recognition technology is biased, there is a risk that customers of a particular demographic or appearance will be shut out of their accounts and temporarily prevented from accessing their finances. At best this is inconvenient; at worst, it threatens to impose a kind of digital foreclosure on honest customers, at the hands of businesses and with minimal control left to the individual.
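
To make the authentication scenario concrete, the sketch below shows a simplified one-to-one (1:1) verification flow. The embedding values, threshold and fallback route are hypothetical placeholders; a production system would use a trained face-embedding model and a carefully calibrated threshold:

```python
# Simplified 1:1 verification flow for remote account access. The embedding
# values, threshold and fallback route are hypothetical placeholders.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

ENROLLED_EMBEDDING = [0.12, 0.80, 0.35, 0.44]  # captured at enrolment
MATCH_THRESHOLD = 0.90                          # single global cut-off

def authenticate(login_embedding):
    # A single global threshold applied to a biased model will reject
    # rightful customers from some demographics more often than others,
    # so a failed face check should fall back to another factor rather
    # than lock the customer out.
    score = cosine_similarity(login_embedding, ENROLLED_EMBEDDING)
    if score >= MATCH_THRESHOLD:
        return "access granted"
    return "face check failed: fall back to password or manual review"

print(authenticate([0.10, 0.82, 0.33, 0.47]))
```

The design point is the fallback: face recognition here gates convenience, not access itself, which limits the damage a biased model can do to any individual customer.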

Trust is currency

To ensure that individuals trust AFR, businesses will need to build in a transparent workflow and ensure they can account for, and justify, the decisions made by the AFR systems they use. Getting this wrong not only risks a breach of legal obligations, but also threatens to deter consumers from using efficiency-creating forms of verification, pushing them back towards the traditional methods they are familiar and comfortable with.

Know your supply chain

Security is another aspect of everyday life in which the use of facial recognition technology is growing, with a number of large retailers exploring whether it can help prevent misbehaviour in stores. It has been reported that the retailers will upload photos of certain people they want to watch, such as known shoplifters or disgruntled former employees, in the hope of identifying and removing them from the premises. Again, a facial recognition system trained on a biased dataset will be prone to misidentifying innocent shoppers as undesirable members of the public. Aside from being potentially problematic in a legal sense, such issues are simply bad for business.
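
Technically, this is a one-to-many (1:N) search rather than the 1:1 verification used for account access, which compounds the misidentification risk: every shopper is compared against every watchlist entry, so even a small per-comparison false-match rate produces regular false alerts. A minimal sketch, with invented identities, embeddings and threshold:

```python
# Hypothetical 1:N watchlist search. Identities, embeddings and the alert
# threshold are invented for illustration only.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

WATCHLIST = {
    "person_001": [0.20, 0.75, 0.40, 0.30],
    "person_002": [0.90, 0.10, 0.25, 0.60],
}
ALERT_THRESHOLD = 0.95  # deliberately stricter than a 1:1 check

def search_watchlist(probe):
    """Return the best watchlist match at or above threshold, else None."""
    best_id, best_score = None, 0.0
    for identity, embedding in WATCHLIST.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    # An alert should trigger human review, not automatic ejection.
    return best_id if best_score >= ALERT_THRESHOLD else None

print(search_watchlist([0.21, 0.74, 0.41, 0.29]))
```

A stricter threshold and a human check on every alert are the obvious mitigations, but neither cures an underlying model that scores some demographics less reliably than others.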

Despite the technical complexity involved here, there are some clear questions that emerge:

  1. How can it be ensured that AFR is deployed in a way that does not breach the rights of individuals or give rise to liability on the part of the user?
  2. How can it be deployed without risking damage to customer relationships?

These are delicate issues that need to be sensitively discussed and negotiated with relevant actors in the supply chain. Suppliers of the technology are those best placed to give assurances as to the integrity of the datasets and training processes, and it is important that the right conversations take place between buyer and supplier, backed up by contractual protections where appropriate.

Privacy – what are the key implications for AFR?

It is clear that the nature of the data that AFR processes has far-reaching implications for privacy. Given the severe penalties now available to data protection authorities under the new European regime, the stakes have never been higher for organisations seeking to utilise such technology.

The nature of the artificial intelligence underpinning AFR technology makes it difficult to scrutinise exactly how it generates its outputs – but the implications for businesses go far beyond understanding the inner workings of AFR.

  • Under the GDPR, biometric data is recognised as a special category of data.
  • Biometric data is afforded a higher level of legal protection – organisations using it take on a higher compliance burden to use the information lawfully.

But businesses have many reasons to engage with AFR, and utilise it to unlock the value of underlying datasets. Significant developments have taken place in the healthcare arena, where it is possible to track patient use of medication, gauge pain levels and even detect genetic diseases through automated analysis of facial features. The promise of such applications is clearly huge, but companies providing these kinds of services find themselves operating at an especially sensitive intersection of health and biometric data.

Building super-profiles – facial recognition and profiling

Similarly, where facial recognition technology is used in conjunction with profiling, as is becoming increasingly common in the retail and marketing sector, organisations must be more conscientious than ever before:

  • Laws like the GDPR specifically regulate automated decision-making – i.e. making a decision solely based on use of tools like AFR, without human involvement.
  • The rules also tightly restrict unfettered use of AFR technology for profiling purposes – e.g. automatically processing personal data to evaluate how people might behave when exposed to certain stimuli.
  • Organisations need to provide information about the activity and give individuals simple ways to request human intervention or challenge a decision – for example, if they don't understand or agree with AFR use involving automated decision-making or profiling. One possible shape for such a safeguard is sketched below.
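
The sketch below is illustrative only: a "match" from the model is never finalised automatically, a human reviewer sits between the AFR output and any adverse action, and the data subject can contest the outcome. The class names and routing logic are hypothetical, not a reference implementation:

```python
# Hypothetical safeguard around an AFR-driven decision: a "match" is never
# acted on automatically, and the data subject can contest any outcome.
from dataclasses import dataclass, field

@dataclass
class AfrDecision:
    subject_id: str
    model_output: str          # e.g. "match" or "no_match"
    confidence: float
    status: str = "pending"
    audit_trail: list = field(default_factory=list)

def decide(d: AfrDecision) -> AfrDecision:
    d.audit_trail.append(
        f"model output: {d.model_output} (confidence {d.confidence:.2f})")
    if d.model_output == "match":
        # Adverse outcomes are routed to a person, never finalised solely
        # by the model -- the human-involvement safeguard described above.
        d.status = "awaiting human review"
    else:
        d.status = "finalised"
    return d

def contest(d: AfrDecision, reason: str) -> AfrDecision:
    """Let the data subject challenge a decision and force human review."""
    d.status = "awaiting human review"
    d.audit_trail.append(f"contested by subject: {reason}")
    return d

d = decide(AfrDecision("cust_42", "match", 0.83))
print(d.status)        # awaiting human review
print(d.audit_trail)
```

The audit trail is as important as the routing: being able to show who (or what) decided, on what basis, and when a human intervened is what makes the decision accountable after the fact.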

The long-term potential of the technology to enhance the customer experience and business decision-making therefore cannot be pursued unchecked – and we can only expect other local laws to align with this approach. The appeal of tailoring content to particular individuals based on their reactions, or of recognising members of customer loyalty programmes on their arrival in store, cannot be ignored. But it comes with risks. Among other dangers, having custody of such sophisticated and extensive data profiles makes organisations a prime target for cybercrime, the repercussions of which may be even more detrimental in a reputational sense than a financial one.

To avoid these pitfalls, appropriate safeguards need to be put in place and the power to decide how facial recognition data is treated needs to rest squarely in the hands of data subjects. Organisations should put privacy front and centre in any proposed deployment of AFR – doing everything from ensuring there is a lawful basis for the processing, to developing meaningful privacy notices and an appropriate retention policy, to maintaining adequate technical and organisational measures to prevent data being compromised. Systems of ongoing audit are critical. The days of ticking a compliance checklist and filing it away are gone. Active engagement is not only a compliance requirement, but a business asset.

Looking forward

As with so many emerging technologies, facial recognition promises much but is not without hazard. Its use in law enforcement has proven particularly controversial, but its deployment across other industries raises related issues that must be fully considered by companies seeking to reap its benefits. Organisations should resist ‘knee-jerk’ implementation of facial recognition in the short term if it comes at the expense of properly contemplating these questions. Ultimately it is those who pause to nurture the technology in a sensible way and ensure the rights of the individual are respected that will come out on top.

Written by Jumani Robbins, Trainee.