Is this the End for Tech Self-Regulation?
UK 'Online Harms White Paper' published
12 April 2019
The UK Government Department for Digital, Culture, Media & Sport and the Home Department jointly published an Online Harms White Paper on 8 April, beginning a 12-week public consultation on the issue.
The White Paper consultation sets out the UK Government's proposed approach to regulating companies that allow users to interact with each other online, to ensure that the rights people have offline are equally protected online. It is an attempt to address the skewed balance of power between the big tech platforms and ordinary citizens.
"The era of self-regulation for online companies is over" – Jeremy Wright, the UK’s digital secretary, commenting on the publication of the White Paper.
Current liability for illegal content
Under the current liability regime (derived from the EU's e-Commerce Directive), platforms are protected from legal liability for illegal content they 'host' (rather than create) unless, having been notified of its existence or having identified it through their own technology, they fail to remove it from their services in a timely manner.
The existing liability regime only forces companies to take action against illegal content once they have been notified of its existence. It therefore does not provide a mechanism to ensure proactive action to identify and remove content. Indeed, it may even discourage active monitoring.
What is being proposed?
The White Paper, published in the wake of the terrorist attack live-streamed in New Zealand in March 2019, claims that the UK will be the first mover to establish a regulatory framework tackling a range of online harms - from terrorist content and child sexual exploitation to problems that "may not cross the criminal threshold but can be particularly damaging", especially to children and vulnerable users. Examples of problems that may not necessarily be unlawful include disinformation, online manipulation, trolling, hate speech and cyber-bullying.
The new regulatory framework described in the White Paper will set clear standards to help companies ensure the safety of users while protecting freedom of expression. It aims to promote a culture of continuous improvement among companies and to encourage them to develop and share new technological solutions rather than merely complying with minimum requirements. It cites the example of a November 2018 hackathon, co-hosted by the Home Secretary and five major tech companies, which developed a new tool to tackle online grooming, and lists a number of online safety initiatives.
The plan is to introduce a new statutory duty of care, which will be overseen and enforced by an independent regulator, funded by the tech industry. This regulatory framework will not go as far as requiring active monitoring of all content, as this would put a disproportionate burden on companies and raise concerns about user privacy. Instead, it would require companies to take reasonable and proportionate action to tackle harmful content and activity on their services.
The rules will apply to a broad range of online companies that allow users to share or discover user-generated content or interact with each other online including:
- social media platforms (e.g. Facebook, Twitter, Instagram, YouTube);
- public discussion forums (e.g. Reddit, 8chan);
- search engines (e.g. Google, Bing); and
- messaging services (e.g. WhatsApp, Viber).
The duty of care approach will mean companies must improve their understanding of the risks associated with their services and take effective and proportionate steps to mitigate those risks. The regulator would have powers to require them to report annually on harmful content on their platforms and the measures they are taking to address it. The regulator would publish these reports to enable users and parents to make informed choices. It would also have powers to require information about the use of automated decision-making algorithms, for example those used to serve specific content to specific users.
The White Paper acknowledges that any obligations to monitor for tightly defined categories of illegal content should not apply to private channels, because of privacy concerns. It includes consultation questions on what should be considered private communications for these purposes.
The White Paper proposes for consultation that the regulator would have significant powers to ensure that all companies in the scope of the regulatory framework fulfil their duty of care, including the power to:
- levy substantial fines (possibly comparable to the GDPR's maximum of up to 4% of a company's global annual turnover);
- impose criminal liability on individual members of senior management;
- require third-party companies to withdraw services that facilitate access to the services of an infringing company (e.g. app stores, or links in social media posts);
- require internet service providers to block non-compliant websites from being accessible in the UK; and
- publish notices naming and shaming those that break the rules.
The finer details of enforcement will be set out by the regulator in 'codes of practice'. If companies want to fulfil this duty in a manner not set out in the codes, they will have to explain and justify to the regulator how their alternative approach will effectively deliver the same or a greater level of impact.
The White Paper proposes that the Government would have the power to direct the regulator in relation to codes of practice on terrorist activity, child sexual exploitation and child abuse. The regulator would also be expected to work with law enforcement on codes of practice relating to harms such as incitement to violence and the sale of weapons. One key decision still to be made is whether the regulator should be a new body, an existing one such as Ofcom, or a combination of the two.
Enforcement in an international context
The new regulatory regime will need to handle the global nature of both the digital economy and many of the companies in scope. The law will apply to companies that provide services to UK users, but the regulator’s powers will ensure that it can take action against companies without a legal presence in the UK. Where companies do not have a legal presence in the UK, close collaboration between government bodies, regulators and law enforcement overseas, in the EU and further afield, will be required.
The White Paper proposes that the regulator may require companies which are based outside the UK to appoint a UK or EEA-based nominated representative. This is similar to the concept of nominated representatives within the EU’s GDPR. However, under GDPR, the extent of compliance by companies based outside the EEA is still relatively untested.
The White Paper is a broad-based attempt to tame the "Wild West Web" across a range of harms. TechUK, the trade body, called the White Paper a "significant step forward" but added that making directors liable for content, or blocking providers, were "heavy handed" approaches. Other commentators have characterised the proposal as the death knell to free online expression or voiced concern that whilst major tech companies can easily foot the bill to ensure compliance, it may unfairly hinder start-ups.
The White Paper acknowledges the need to collaborate internationally, both for enforcement purposes and to share best practice and avoid unnecessary inconsistency in approach. While the United Kingdom seeks to position itself as a leader in tackling online harms, other jurisdictions have also been active in this area. Australia passed laws on 4 April 2019 aimed at tech firms that fail to remove 'abhorrent violent material' (defined to include terror attacks, murders, rapes, tortures and kidnappings). The time frame for removal of such material is 'expeditiously', a term left undefined. Company executives or those who "provide a content service" could face up to three years in jail, and companies could face fines of up to 10% of annual turnover. The EU has also been engaged on this issue, publishing a 'Fake News and Online Disinformation' policy and an 'Action Plan against Disinformation' in a bid to tackle the spread and impact of online disinformation in Europe.
It could take up to two years for the White Paper to be converted into law in the UK. An interesting question is whether the White Paper will still be relevant in 2021, given how quickly technology and the internet can evolve.