The Online Safety Bill

The Story of Goldilocks and the Three Approaches

By Wa’el Alanizi

Following England's loss to Italy in the Euro 2020 final, three young Black players — Marcus Rashford, Jadon Sancho, and Bukayo Saka — were subjected to horrendous racist abuse on social media. These are national heroes who are role models to our children. Unfortunately, such abuse happens on a regular basis: whether it is directed at sexuality or ethnicity, those voicing these 'opinions' are engaging in online harassment. One solution is to better educate our children, but that will take time. What can social media moguls do? Online abuse has existed since the beginning of the internet, but it seems to have escalated in recent years, with athletes of colour often being the targets. Will the Online Safety Bill provide a permanent solution?

The UK government has acknowledged the need for internet platforms to improve their site regulation. In October 2017, it released the Internet Safety Strategy Green Paper, with the goal of ensuring Britain is the safest place in the world to be online. This became the White Paper on Online Harms before being included in the proposed Online Safety Bill. The Joint Committee on the Draft Online Safety Bill is now reviewing the Bill and is expected to submit its conclusions, with the goal of the Bill becoming law in 2023 or thereabouts.

This article will argue that the Online Safety Bill is at once too restrictive of online freedom of expression and not strong enough to thoroughly tackle harmful online content, and that the blind spots of older internet safety legislation and judicial precedent have paved the way to the provisions under the new Bill.

It was said in Handyside v United Kingdom (1979-80) 1 EHRR 737 [49] that: ‘Freedom of expression constitutes one of the essential foundations of such a society…applicable not only to 'information' or 'ideas' that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population…This means…that every 'formality', 'condition', 'restriction' or 'penalty' imposed in this sphere must be proportionate to the legitimate aim pursued.’

The Online Safety Bill and the debates surrounding it have been long, and the process one of trial and error, much like Goldilocks searching for the bowl of porridge that was just right. Three schools of thought (and possible approaches to the Bill) stand before us: 'too much,' 'too little,' and 'just right.' For 'too much,' the Bill usurps the right to freedom of expression under Article 10 ECHR, enshrined in domestic law by the Human Rights Act 1998, a right fundamental not only to the safe haven of discussion and collaboration that the internet can be, but to democratic society as a whole, with which the internet is irrevocably intertwined. For 'too little,' the Bill cannot realistically do enough, and many unchecked harmful comments and courses of conduct can and will fall through the cracks of government and legal surveillance.

Through wrestling with both, we may eventually come to a balance that is ‘just right.’ This article will explore the possibilities of each of these views and outcomes, which are contrasting but not mutually exclusive, by setting the current law and the new Bill in context.

The Offences Proposed Under The New Bill

  1. ‘A ‘genuinely threatening’ communications offence, where communications are sent or posted to convey a threat of serious harm.’ (covers threats to kill, rape or inflict violence on targeted individuals or groups)

  2. A ‘harm-based communications offence to capture communications sent to cause harm without a reasonable excuse.’

  3. An offence for ‘when a person sends [or posts] a communication they know to be false with the intention to cause non-trivial emotional, psychological or physical harm.’

Differences From The Malicious Communications Act 1988 and Communications Act 2003

The new Bill covers communications 'posted' as well as 'sent' (rather than just 'sent'), so as to capture internet communications such as message boards and social media feed posts, which do not have an intended recipient in the way that private communications of the past did. Social media forums and feeds accessible anywhere, which would have been an alien concept in 2003 (and practically science fiction in 1988), are now a ubiquitous and essential part of everyday life that the previous legislation fails to cover. Online feeds are also a major source of information and news for most people; dangerous misinformation, posted in the knowledge that it is false and with recklessness as to the harm caused (such as hoax Covid-19 treatments), is therefore an exponentially greater risk now than in the era when the previous legislation was enacted.

There is a lower threshold to prove the elements of an offence under the new Bill. Under s127(1)(a) and (b) of the Communications Act 2003, the elements of an offence needed to be satisfied under vague, high-threshold terms. The communication sent needed to be 'grossly offensive or of an indecent, obscene or menacing character'. This was problematic because that narrow interpretation turned purely on content, rather than context and intention. Under the new Bill, it is the sender's or content creator's intention to cause harm, and the causation of harm, that carry weight in the elements of an offence.

Judicial Treatment of The 2003 Act

In the run-up to the proposed draft Bill, the evolving importance of intention, rather than strictly the character of online content, can be seen in case law. The courts were starting to read intent into s127 of the 2003 Act. With the advent of Twitter and Instagram, the nature of online content was changing to reflect the whims and opinions of users in a public sphere rather than purely private communications. The court in Chambers v DPP [2013] 1 All E.R. 149 held that a tweet by the appellant, saying he would be 'blowing the airport sky high', not only lacked menace but was clearly intended as a joke, even though this reading of 'intent' was outside the ambit of s127(1). It was as though the courts had begun recognising this defect in the original statute in a society where internet communications were becoming ever more ubiquitous.

Further qualification in the existing law was needed with regards to internet content that was not necessarily ‘grossly offensive’, ‘menacing’ or ‘obscene’, but was sent ‘for the purpose of causing annoyance, inconvenience or needless anxiety to another’ (s127(2) of the Communications Act 2003).

The decision in Scottow v CPS [2021] 1 W.L.R. 1828 clarified this. On appeal by way of case stated, Warby J held that the lower court erred in ruling that a series of tweets by the appellant, sent to the annoyance of the complainant, fell under the ambit of an offence under s127(2). The prosecution had sought to construct an offence by conflating provisions of the Protection from Harassment Act 1997, namely s7(2) and (3), with the facts at hand in order to assert that the appellant's tweets amounted to a 'course of conduct' which was unreasonable and 'caused distress' to the recipient. This was founded on the basis that an injunction was sought following the events online ([2021] 1 W.L.R. 1828 [26]).

This was found to be ‘wrong in law’, as s127(2) is not a ‘harassment lite’ provision, but a means to only ‘prohibit the use of online services for ‘no other purpose’ than to ‘annoy’ or cause another user ‘inconvenience, or needless anxiety’ ([2021] 1 W.L.R. 1828 [32]). Notably, at paragraph [29], Warby J stated that he did ‘not consider that the mischief aimed at by Parliament when it passed s 127 of the 2003 Act was as broad as causing offense online.’ This highlights an issue with the 2003 Act, namely that such ‘mischief’ is a much more prevalent online issue than the purpose and scope of s127(2) gives credit to. The provision is narrow by virtue of the fact that these sorts of Twitter grievances did not exist and were not an issue at the time the Act came into force.

These judgments are significant: not only do they recognise and demonstrate the limits of the 2003 Act in the evolving online climate, but they also highlight the importance of an internet user's intention to cause harm to a recipient or reader, which is relevant in the very specific context of s127(2) but lacking in the more serious offence under s127(1).

Where the 2003 Act fails the modern online community is that it creates a blind spot for content that is not exclusively intended to annoy, but is not always objectively menacing or grossly offensive either. The improvements brought in by the new Bill are better suited to today's online formats. It encompasses both internet content posted to an audience and content sent to an intended recipient, as well as the intention behind the actions of the sender or creator. It sidesteps the burden of the narrow criteria of 'grossly offensive' or 'indecent, obscene or menacing' content, focusing instead on the sender's intention to cause harm. This covers contexts where content may not be 'grossly offensive' or 'menacing' to some who see it, but is posted with the intention to cause harm and distress to a targeted few people or groups; for example, targeted threats aimed at a 'protected characteristic' such as race or religion (Equality Act 2010), or sexually explicit content posted online without consent.

It also creates a duty of care for internet service providers and servers towards their users to regulate and remove content that is harmful and intended to be harmful. Under the new Bill, OFCOM can issue provisional notices of enforcement action and penalties against servers and online providers if this surveillance duty is not met (s80(4)(c)(i)-(ii)).

Too Much And Too Little

It was stated by Nicklin J in Hayden v Dickenson [2020] EWHC 3291 (QB) [44]: “Where Article 10 is engaged, the Court's assessment of whether the conduct crosses the boundary from the unattractive, even unreasonable, to oppressive and unacceptable must pay due regard to the importance of freedom of expression and the need for any restrictions upon the right to be necessary, proportionate and established convincingly.”

To borrow the words of Nicklin J, albeit in this context, those in the 'too much' camp fear that behaviours that are merely 'unattractive' or 'unreasonable' will be caught in the ambit of the new Bill, and that its powers will be stretched too far to cover what are otherwise minor annoyances. It is unrealistic to assume that OFCOM will be able to sensibly prune through all the relevant content, as well as all breaches of the duty of care owed by internet providers and servers under the new Bill. It is argued that posts that are merely 'unattractive' or 'unreasonable', and therefore neither genuinely threatening nor harm-based, will be caught in the snare of the new powers, compromising Article 10 rights. There are fears that intent to cause harm will be read into even the most trivial of online posts, and that such complaints may warp the scope of the Bill into an 'interpreted intention' on the part of those offended, rather than the actual intention of the sender or content creator.

Those in the 'too little' camp interpret this as evidence that we need to clamp down harder. Given how vast the internet is, there is too much ground for the new Act to cover; increased internet surveillance, and increased punishment of service providers and social network giants that fail in their duty of care, are needed. Labour, for example, says that the Bill needs to go further and extend to criminal liability for senior managers, in addition to the duty of care of service providers, both as increased online protection and as a deterrent against any leniency by these providers, so that they 'sit up and take notice', in the words of shadow culture secretary Lucy Powell.

There is also no guidance on the surveillance of a course of conduct, such as posts by the same people or affiliated groups under different accounts across numerous online platforms and message boards. It would be near impossible to keep track of them all, even with the best coordinated efforts of OFCOM reporting. In practice, given the sheer scale of the internet, a sizeable portion of duty of care breaches, and therefore of harmful content, would slip through the cracks. There are calls for stricter identity measures, for example passport verification, but the issues with this are obvious: such requirements provide fertile ground for hackers, and for new harms arising from data breaches. Campaigners from the 'too much' camp point to this as a major concern.

Furthermore, internet anonymity is a double-edged sword. Anonymity can provide safety online (for example, in confessional forums and in safe spaces for victims of domestic violence, discrimination, and abuse), which those in the 'too much' school of thought would highlight. However, online anonymity can of course be used by those intending to cause harm while covering their tracks, either intentionally (threats, or harassment of select people or groups) or recklessly (spreading disinformation which causes harm through its subsequent belief and use). Hence the arguments that more needs to be done to verify online identities.

Just Right

In light of the comments of Warby J on the mischief aimed at by Parliament in the 2003 legislation, the legislation that follows from the assent of the Online Safety Bill into the Online Safety Act will have to be interpreted by the courts in the spirit and intention of the Act when it was enacted. The ultimate purpose is to protect against non-trivial online abuse and harm, and to protect vulnerable parties and children online. It is not in the spirit of the legislation to needlessly criminalise online debates and petty squabbles in comment sections, or good faith debates that do not descend into tortious or criminal behaviour, harassment, or harm beyond unintended annoyance.

This will be for the courts and the common law to qualify and assess, and such scope will be impossible to fully capture in a single piece of legislation, no matter how hard the new Bill attempts to do so. Ideally, this interplay between the courts and the legislation will be the 'just right' approach that we are looking for. It is hard to tell how effective the proposed provisions will be, given the unclear and ever-changing line separating mere internet trolling from the actual harm and grievances that fall under the ambit of the new Bill, and we hope that case law will shed light on how the provisions stack up against the fundamental freedom of expression (Art 10 ECHR). Until then, we will have to wait and see.

Wa’el Alanizi is a leading criminal, jurisprudence, and Bar Course tutor.
