The direction of Facebook and other social media platforms

Increasing regulatory scrutiny

18 July 2019

Global media has recently been devoting significant attention to the series of scandals involving Facebook, undoubtedly the world's most popular social platform. In a nutshell, issues concerning customer data breaches, the potential influencing of public opinion and the publishing of illegal content, such as offensive, violent or racist comments, have strengthened calls for stricter regulation not only of Facebook, but also of other internet platforms that many of us use every day. For those who have not been following this topic closely, this article summarises the key issues arising from recent developments.

Facebook and Google are often referred to as superpowers because of their ability to control the vast majority of privately owned public spaces. The content they offer can reach millions of users in a short space of time, allowing them to easily influence public opinion. Facebook in particular has been publicly considered to be a modern tool that enhances freedom of expression and information. However, some commentators argue that this tool may be being misused in a way that brings more harm than benefits. 

The problem, these commentators argue, is that despite their significant influence on public life, these platforms are built exclusively on a profit-based model and lack the moral and legal accountability mechanisms that apply to traditional media. Their owners and managers are accountable not to their users or to the public, but only to their own shareholders. In other words, although they do not have the formal authority of sovereign States, their capacity to enable or limit freedom of information and expression is significant, and perhaps even greater than that of most States.

In early 2016, the Guardian and The New York Times reported that Facebook's trending topics section, which is a list of popular articles selected by algorithms and by staff, was allegedly being moderated in favour of liberal viewpoints. Although these allegations were at first denied by Facebook's representatives, Facebook subsequently announced that it would take precautions to minimise risks involving human judgment. Later, Facebook also took several steps to prove that it is a neutral platform that does not favour any particular views (for instance, it has clarified its guidelines and its reviewers have undergone refresher training).

Although an approach of non-interference might previously have seemed to be the preferable solution, some commentators argue that subsequent developments suggest otherwise. Facebook came under intensive public scrutiny after the United States presidential election in November 2016, when Facebook's representatives had to deal with allegations that their neutral platform was being misused to promote fake news and influence the election.

In October 2017, representatives of Facebook, Google and Twitter were questioned before the United States Senate Judiciary Committee. Facebook stated that content spreading disinformation about the election was linked to Russian fake accounts run by the Internet Research Agency – a Russian company publicly regarded as being engaged in online operations to influence opinion on behalf of Russian interests. This disinformation was thought to have reached more than 126 million people, equivalent to about half of eligible US voters. Moreover, Facebook counted more than 3,000 advertisements also linked to the Internet Research Agency.

Some commentators have questioned how no one at Facebook recognised that the platform was being misused. Either the algorithms became so complex that management failed to keep up with them, or management chose not to step up its efforts to monitor income derived from foreign sources.

Some US politicians called for tech companies to invest more in human capital to proactively create frameworks to prevent meddling. A reaction was also forthcoming from the private sector. The New York State Common Retirement Fund, which is the third largest public pension fund in the US with assets of approximately US$200 billion, and activist investment firm Arjuna Capital filed shareholder proposals pushing Facebook and Twitter to take more responsibility for managing the content on their platforms.

In 2017 Facebook announced new tools to prevent misuse of its platform, such as partnering with fact checkers, increasing transparency over political advertisements and hiring 10,000 new moderators to oversee its content. Nevertheless, some critics believe there is a fundamental issue with the platform - it invests far more time and energy in building algorithmically controlled features meant to drive user engagement, or give more control to advertisers, than it does thinking about the social and cultural implications. These voices argue that the public can never fully understand and control Facebook unless it knows how its algorithms work. However, Facebook will hardly be willing to publicise what constitutes the heart of its business.

This issue may also concern other internet platforms. For instance, the Guardian published an article based on insights from a former Google employee, who claims that the main purpose of YouTube's recommendation algorithm is to attract and retain customers' attention. As such, it offers videos aimed at getting customers to spend more time online. This does not inevitably mean that YouTube's algorithm is biased; some experts maintain that it is merely providing users with what they want to see. The problem is that many individuals would prefer to spend more time watching sensational or conspiratorial content than serious content – by the same token, tabloids traditionally sell more than broadsheets. The risk is that the content the algorithms choose for each user may not be serious or unbiased.

Returning to Facebook: in January 2018, the company announced that it was changing the algorithm of its News Feed. As a result, the News Feed was to display less content from businesses, brands and the media, and more content provided by friends and family.

However, this has not stopped the wave of regulation that is already underway. On 1 January 2018, Germany introduced the new German Network Enforcement Act. This act requires large social media platforms, such as Facebook, Instagram, Twitter and YouTube, to block or remove "illegal content" – as defined in 22 provisions of the criminal code, ranging from insult of public office to actual threats of violence – within 24 hours. Breaches are punishable by fines of up to EUR 50 million. The act has been criticised by several human rights and media freedom organisations. For instance, the Global Network Initiative, composed of various NGOs, academics and companies (including Facebook and Google), has said that the law outsources decisions about freedom of expression to private companies, leaving users with no judicial remedy or right to appeal.

The German Network Enforcement Act seems to be inspiring certain other countries worldwide. For instance, similar draft laws and guidelines to regulate online content – referred to as "the German way" – are being discussed in Russia, Singapore, the Philippines, Venezuela and Kenya. The French and UK governments have been developing a plan to improve the identification and deletion of illegal content, while the European Commission has encouraged social media platforms to prevent illegal content from appearing online by laying down a set of guidelines and principles in September 2017 to step up the fight against such illegal content.

Placing specific obligations on tech companies to regulate potentially illegal content may be seen as first step in imposing a special regulatory framework. However, some commentators argue that this may not be sufficient, as content that is not necessarily illegal (typically fake news), may also prove to negatively influence public opinion.

In the US, the Honest Ads bill appears to be one of the first steps being taken. This bill, which was introduced in October 2017 in response to the attempts to influence the 2016 US election, has only recently begun to garner more attention. It targets digital platforms that have at least 50 million monthly visitors and requires them to maintain a public file of all electioneering communications purchased by a person or group spending more than US$500 on ads published on their platform. The file should contain a digital copy of each advertisement, a description of the audience targeted, the number of views generated, the dates and times of publication, the rates charged, and the contact information of the purchaser.

Meanwhile in Europe, the European Commission, fearing the influence that disinformation might have on the 2019 European Parliament elections, has approached social media giants with a proposal for a voluntary system of self-regulation. According to the Financial Times, the European Commission intends to come up with proposals setting out a code of practice to encourage platforms to step up their efforts to close fake accounts, flag sponsored content on their sites and work closely with independent fact-checking organisations. By all accounts, the European Commission is also prepared to come up with new legislation if this voluntary approach fails to bring results by the end of the year. Elsewhere, the French government is drafting legislation that would allow judges to order the deletion of false online content during sensitive election periods.

Another round in the regulatory offensive may force social media platforms to disclose how their algorithms work, some commentators argue. The first shot could be fired by the European Union. According to the Financial Times, the European Commission also intends to force tech companies to give the businesses that rely on their platforms more information on how their ranking algorithms work, and to provide assurances that rankings are compiled in good faith. At first, the draft proposal focused on relationships with online platforms such as Amazon and Apple, but the Commission has since decided to widen its net to include Google, as the provider of an important search engine. The draft legislative document has not yet been officially published. Although this initiative concerns the antitrust sector, it demonstrates that the times may be changing for tech companies.