Greta Barkle and Guevara Leacock of the Data Protection team at BCL Solicitors analyse the long-awaited Online Safety Bill
Eagerly awaited, the draft Online Safety Bill (‘Bill’) has finally been published, delivering on the government’s manifesto commitment to make the UK “the safest place in the world to be online”. The Bill has its genesis in the Online Harms White Paper, published over two years ago in response to widespread concern at the malign underbelly of the internet. But following passionate lobbying by stakeholders, is the result a Bill which has tried so hard to please all interested parties that it ends up satisfying no-one?
Elusive duty of care
The cornerstone of the Bill is a new ‘duty of care’ placed on service providers to protect individuals from ‘harm’. It will apply to providers both based in the UK and – nebulously – those having ‘links’ here. In the government’s sights is the gamut of illegal and legal online content, from child sexual exploitation material and terrorist activity to cyber-bullying and trolling.
The ‘duty of care’ will apply to search engines and providers of internet services which allow individuals to upload and share user-generated content. In practical terms, this net will catch social media giants such as Facebook as well as less high-profile platforms such as public discussion forums.
As regards illegal content, the duty will force all in-scope companies to take proportionate steps to reduce and manage the risk of harm to individuals using their platforms. High-risk ‘category 1’ providers – the big tech titans with large user-bases or which offer wide content-sharing functionality – will have the additional burden of tackling content that, though lawful, is deemed harmful, such as encouragement of self-harm and misinformation.
Adding a further level of complexity, the regulatory framework will apply to both public communication channels and services where users expect a greater degree of privacy, for example online instant messaging services and closed social media groups.
Quite how service providers will be expected to meet these onerous new obligations is not specified in the Bill and, instead, they must wait for full Codes of Practice to be issued.
Rabbits from the hat
Sensitive to public pressure, the government has built on early iterations of its proposals to include new measures addressing concerns raised during the consultation process over freedom of expression, democratic debate, and online scams.
The initial release of the Online Harms White Paper triggered a furore over the potential threat to freedom of speech, with campaigners fearing the proposals would have a chilling effect on public discourse as service providers self-censored rather than face swingeing regulatory penalties for breaches in relation to ill-defined harms. In response to such concerns, service providers will be expected to have regard to the importance of protecting users’ rights to freedom of expression when deciding on and implementing their safety policies and procedures.
Concern has been building for some time about the influence which the largest social media companies potentially wield over political debate and the electoral process. This was seen most starkly in the US during the recent Presidential election, where some platforms became political footballs in their own right. While there are only distant echoes of that here, the role which social media plays in UK democratic events has attracted attention and, in a nod to this, the government has proposed a new duty on category 1 providers to protect “content of democratic importance”. In what might euphemistically be described as opaque, such content is defined as “content that is, or appears to be, specifically intended to contribute to democratic political debate in the United Kingdom…” Service providers affected might well be left scratching their heads about quite how they will be supposed to interpret and satisfy this obligation, and it is to be hoped that the eventual Codes of Practice will provide some much-needed clarity. Absent such guidance, the risk is that they will be pilloried by all sides.
Following a vocal campaign from consumer groups, industry bodies and Parliamentarians, the government appears to have capitulated to pressure to include measures bringing online scams within the scope of the Bill. E-commerce fraud is estimated to be up 179% over the last decade, with romance scams alone resulting in UK losses of £60 million in 2019/20; all service providers will be required to take measures against these illegal online scourges. Commentators have noted, though, that frauds committed via online advertising, cloned websites and emails will remain outside the Bill’s ambit, leaving many investors still vulnerable to the lure of sham investment frauds.
A fierce watchdog?
This ground-breaking regulatory regime will be enforced by a ‘beefed-up’ Office of Communications (‘Ofcom’) which will wield an arsenal of new powers including fines and, in the last resort, business disruption measures. Penalties of up to £18 million or 10% of annual global turnover (whichever is the greater) will be at the regulator’s disposal. Those calling for senior management liability will, however, be disappointed; the Bill will not impose criminal liability on named senior managers of in-scope services, though the Secretary of State has reserved the power to introduce such liability in the future.
It remains to be seen how the tension between online safety on the one hand, and freedom of expression and democracy on the other, will play out. Service providers and Ofcom alike will no doubt have their plates full trying to decipher just how to moderate lawful but harmful online content whilst also ensuring users’ freedom of expression and democratic debate are not adversely affected.
Greta Barkle, associate & Guevara Leacock, legal assistant
BCL Solicitors LLP