Intellectual Property, Information Technology & Cybersecurity

The Online Safety Bill UK: What You Need to Know

It has been six years since Professor Lorna Woods and William Perrin penned their framework for what would become the Online Safety Bill on the back of a Pret a Manger sandwich wrapper.

Since then, following its introduction as a white paper in 2019, the framework has undergone numerous rewrites and gained notoriety as one of the more controversial pieces of recent legislation, prompting heated debate and multiple amendments. Despite these difficulties, it was announced on 19 September that the bill had passed through the House of Lords without further amendment and will receive royal assent.

At face value, and as the government has been keen to remind the public, the bill’s overarching purpose is to protect UK citizens, and especially children, from the more dangerous aspects of the internet.

While the ‘legal but harmful’ content that originally fell under the scope of the bill was removed last year after fierce debate, the legislation still grants the government extraordinarily wide powers to investigate and curb illegal content on the internet. This will include material that promotes child exploitation, illegal immigration and human trafficking, animal cruelty, and terrorism.

Content that promotes self-harm or suicide, especially that aimed at children and teenagers, has also come under the bill’s powers, following a long and well-publicised campaign by parents and survivors who have demanded that more is done to protect the most vulnerable in our society.

Websites will need to introduce stronger age verification procedures, with social media companies liable for checking that users are not underage. Businesses will also have to increase scrutiny of their advertisers, with fines meted out to businesses that allow fraudulent advertising on their websites.

Social media companies have been quick to point out that this places a considerable burden on businesses: in a climate in which websites often have hundreds of advertisers, sometimes contracted through third parties, any oversight, however understandable, could be met with a punitive fine or criminal prosecution.

The bill sets out a three-pronged protection plan: ensuring any illegal content is removed, placing a legal responsibility on companies to enforce the terms of the bill, and allowing users to be able to filter content they do not wish to see online.

The legislation will still make a distinction between category one companies, which are seen as higher-risk entities and will thus receive the most scrutiny, and category two companies, which will be monitored and controlled, but to a lesser extent. However, in both categories, the obligation to police and filter content will lie solely with the companies.

Ofcom has been confirmed as the regulator, with the power to issue fines of up to £18m or 10% of the offending company’s annual revenue. Companies and, in more serious cases, individual executives can be held criminally liable. In the most extreme instances, Ofcom will have the power to ban an offending platform from operating within the United Kingdom. This has led many to speculate that the true scope of the bill will only be realised in the courts through precedent. This is one of many concerns that have been raised.

The harsh penalties, coupled with the wide scope and the uncertainty of how such legislation will work in practice, should spur companies to begin preparing now.

To find out more, please read the full article here.

If you need any advice on this issue, please do not hesitate to get in touch with Callum Sinclair (callum.sinclair@burnesspaull.com).
