ICO to fine Clearview AI £17 million over the use of its facial recognition tech
Proposed fine would be the ICO's third largest to date
03 December 2021
After opening a joint investigation with the Office of the Australian Information Commissioner (OAIC) in July 2020, the UK’s information regulator, the Information Commissioner’s Office (ICO), announced on 29 November 2021 that it had issued a provisional notice of intent to fine Clearview AI just over £17 million. This fine would be the third largest imposed by the ICO, following the fines against British Airways and Marriott in 2020.
As we explained in our post analysing the initiation of the investigation, Clearview AI has reportedly amassed a database of as many as 3 billion images scraped from popular social media sites such as Facebook, Twitter and LinkedIn. The firm's service allows users to upload photographs of people and then its facial recognition AI will match them against its database and provide links to where the individuals appear online.
Clearview markets its software tools to law enforcement agencies, with the service reportedly used by 600 such agencies across the globe. In its announcement, the ICO acknowledged that "the service provided by Clearview AI Inc was used on a free trial basis by a number of UK law enforcement agencies, but that this trial was discontinued and Clearview AI Inc’s services are no longer being offered in the UK."
Apart from the UK and Australian investigations, Clearview has been subject to regulatory scrutiny in Canada and is currently facing civil lawsuits in the U.S. alleging that Clearview violated Illinois' Biometric Information Privacy Act.
The ICO's announcement comes after the OAIC found last month that Clearview had violated its privacy laws and required the company to cease collecting images of individuals in Australia and to destroy any such images it had previously collected.
The OAIC is Australia's information regulator. Entities under its jurisdiction, known as APP entities, are bound by the Australian Privacy Principles (APP), which are set out in the Privacy Act 1988 (PA 1988).
The OAIC found that Clearview collected facial images and biometric templates without consent in breach of numerous privacy obligations under PA 1988, and held it to be liable as an APP entity. The OAIC decision highlights the lack of transparency and consent surrounding collection practices from public sources, including personal data scraped from social media sites, the monetisation of individuals' data for a purpose entirely outside reasonable expectations, and the risk to people whose images are included in Clearview AI's database.
The joint investigation between the OAIC and the UK's ICO demonstrates that global data regulators will work together on issues that impact data subjects across borders, exposing firms like Clearview to parallel enforcement action in multiple jurisdictions.
Clearview AI is incorporated in Delaware in the United States and asserted that it is not subject to Australian jurisdiction and therefore cannot be regulated as an APP entity. However, the firm did admit it provided trials and demonstrations of its products to Australian police agencies at the request of those agencies. The OAIC found that this meets the PA 5B(3)(b) threshold (regarding extra-territorial operation) of having an 'Australian link'. In reaching its decision, the OAIC cited both the marketing and soliciting of services with the aim of procuring subscriptions from Australian police forces during a trial period from October 2019 to March 2020, and the continuing indiscriminate scraping of facial images from the internet.
The OAIC decision states that during the trial period Clearview AI collected 'Probe Images' that were uploaded to the facial recognition tool by Australian users, including images of suspects, victims of crime and members of Australian police agencies who searched for themselves and individuals known to them. The Commissioner's report concluded that this fulfilled PA 5B(3)(c), which stipulates that the personal information was collected or held by an organisation or operator in Australia or an external territory.
Clearview AI has been ordered to cease all data collection and disclosure of Australians' matched images to its registered users, and to destroy all scraped images, probe images, scraped image vectors, probe image vectors, and opt-out vectors that it has collected on Australian individuals in breach of the PA 1988 within 90 days of the OAIC determination.
The OAIC has the power to apply to the Federal Court for a civil penalty against serious or repeated privacy breaches, currently up to A$2.22 million for corporations for each serious and/or repeated interference with privacy. However, it did not seek a pecuniary sanction in the Clearview AI decision.
The order to purge all records of Australian images and vectors could be an onerous task for a company that advised an Australian police agency that it hoped to have 30 billion images indexed by the end of 2020. It will be even more challenging if, as Clearview stated in its response to the OAIC, the database lacks data on individuals' nationality.
The ICO, in its press release summarising its preliminary findings, determined that Clearview failed to comply with UK data protection laws in the following ways:
- failing to process the information of people in the UK in a way they are likely to expect or that is fair;
- failing to have a process in place to stop the data being retained indefinitely;
- failing to have a lawful reason for collecting the information;
- failing to meet the higher data protection standards required for biometric data (classed as ‘special category data’ under the GDPR and UK GDPR);
- failing to inform people in the UK about what is happening to their data; and
- asking for additional personal information, including photos, which may have acted as a disincentive to individuals who wish to object to their data being processed.
Based on these violations, the ICO is proposing a fine of just over £17 million and a requirement that Clearview stop collecting the data of people in the UK and delete any such data that has already been collected.
Clearview now has the opportunity to make written representations to the ICO with regard to the imposition and amount of the proposed fine, after which the ICO will reach its final decision. To date, the final GDPR-related fines issued by the ICO have been reduced from the amounts proposed in the notices of intent. For example, the final fine imposed on British Airways was £20 million, compared to a proposed fine of £183 million; and the final fine imposed on Marriott International, Inc was £18 million, compared to a proposed fine of £99 million.
Clearview is likely to make representations to the ICO raising jurisdictional and definitional arguments similar to those it put to the OAIC. In Australia, Clearview sought to argue that it was not collecting personal data. However, the English courts have already considered facial recognition technology: in the 2020 South Wales Police judgment, the Court of Appeal determined that the technology did constitute processing of biometric data, highlighting that "like fingerprints and DNA, it is information of an intrinsically private character – biometric data", and was governed by the Data Protection Act 2018.
Automated facial recognition software, particularly in a law enforcement context, has been a key area of focus for the ICO and regulators around the globe over the past couple of years. It raises a number of competing considerations, including the benefits for national security, the potential conflict with fundamental rights, and whether a distinction should be drawn between use by law enforcement agencies and private companies.
The EU, amongst others, is grappling with these issues following publication of the European Commission's draft EU AI Act in April 2021. The draft act provides for expansive extraterritorial scope and would apply to any providers (including those outside the EU) who place AI services into EU markets. It bans the use of biometric identification systems in public spaces for law enforcement, except for specific uses such as fighting terrorism. The European Parliament has since called for a full ban on police use of facial recognition technology in public spaces, as well as a ban on private facial recognition databases such as those operated by Clearview.
In the UK, there is no instrument that solely regulates facial recognition technology, so it is necessary to consider the framework of applicable legislation, including the GDPR and Data Protection Act 2018. The Clearview enforcement action provides the ICO with an opportunity to comment on data compliance issues associated with the technology when employed by a private company.
Many firms are considering facial recognition technology, whether for "touch free" access prompted by the pandemic or for use in smart devices. The UK and Australian enforcement actions highlight the importance of companies obtaining appropriate notice and consent from data subjects and maintaining adequate privacy compliance programmes, and the final Clearview findings will serve as a good case study. The ICO has also shown that it will apply higher data protection standards to technologies that process biometric data. It remains essential for companies to know exactly what types of data they are processing, in order to identify and implement the correct requirements.
Depending on the nature of the final decision, the ICO's action may impact AI or machine learning applications other than facial recognition technology if they are trained on data sets culled from publicly available websites, particularly when the data contain personal or sensitive information about data subjects.