Blackmail Risk Posed By Online Safety Bill, Says Expert

The Government’s Online Safety Bill could leave adult web surfers open to blackmail, an expert has warned. Matthew Lesh, head of public policy at the Institute of Economic Affairs, said that features designed to protect people might backfire.

He told GB News: “[There are] so many core privacy threats, the very notion that in order to access the majority of the pornographic material, you have to identify yourself to the site – that is risking creating a massive honeypot, a huge risk of a database of these people’s adult viewing habits added to their identity.”

He made the comments in an interview with Tom Harwood on GB News this morning (17 March).

Lesh added that the legislation may prompt users to try to circumvent the new rules by using a Virtual Private Network (VPN) and could cause mass blocking of websites for all users.

He said: “It’s going to encourage people, of course, to use a VPN as a trick to try to get around it. It could lead to blocking on scale, overseas websites that choose not to comply with the UK’s quite authoritarian rules. We don’t exactly know how this is going to come out in practice and the Government’s been very vague about this. In order to use basically any site, any platform, but let’s say Google, that Google wants to show you things that might not necessarily be appropriate for someone under the age of 18, Google Search might need to make you log in and verify your identity as well so they know what content is appropriate to show you…”

Lesh said the Government appears to be repeating mistakes it has made in the past.

“This has been done before, the Government tried to put in place the porn laws a few years ago, they ended up cancelling them and reversing the legislation.”

His comments come as tech bosses face the threat of prosecution and up to two years in jail if they hamper investigations by the communications watchdog from next year, under a wide-ranging overhaul of a landmark online safety bill.

The government has reduced a grace period for criminal prosecution of senior managers by 22 months from two years to just two months, meaning tech bosses could be charged with offences from early next year.

The change was announced as the government publishes a revamped online safety bill, which places a duty of care on social media platforms and search engines to protect users from harmful content. The new measures include:

- New criminal offences in England and Wales covering cyberflashing, taking part in digital “pile-ons” and sending threatening social media posts.
- Big platforms must tackle specific categories of legal but harmful content, which could include racist abuse and posts linked to eating disorders.
- Sites hosting pornography must carry out age checks on people trying to access their content.

The updated legislation introduced to parliament on Thursday confirms, and brings forward, UK-wide proposals for a fine or jail for senior managers who fail to ensure “accurate and timely” responses to information requests from regulator Ofcom.

It introduces a further two new criminal offences that apply to companies and employees: tampering with information requested by Ofcom; and obstructing or delaying raids, audits and inspections by the watchdog. A third new criminal offence will apply to employees who provide false information at interviews with the watchdog.

Nadine Dorries, the culture secretary, said tech firms have not been held to account when abuse and criminal behaviour have “run riot” on their platforms. Referring to the algorithms that tailor what users see on social media platforms which have been heavily criticised during scrutiny of the draft bill, she added: “Given all the risks online, it’s only sensible we ensure similar basic protections for the digital age. If we fail to act, we risk sacrificing the wellbeing and innocence of countless generations of children to the power of unchecked algorithms.”

The legislation’s duty of care applies to internet companies which host user-generated content such as Twitter, Facebook and TikTok and search engines such as Google.


It is split into several categories which include: limiting the spread of illegal content such as terrorist material, child sexual abuse images and hate crime; protecting children from harmful content; and for the biggest platforms, protecting adults from legal but harmful content which is likely to include racist abuse and content linked to eating disorders.

The priority categories of legal but harmful content, which tech firms will be required to police, will be set out in secondary legislation. The government argues that this means the definition of harmful content will not be delegated to tech executives. Nonetheless, civil liberties groups are concerned that this will give ministers the power to censor content. On Wednesday the Open Rights Group called the bill an “Orwellian censorship machine”.

Companies that breach the act face fines levied by Ofcom of up to 10% of global turnover or £18m, whichever is higher – which in the case of Facebook’s parent company would be nearly $12bn (£9.2bn). The watchdog will also have the power to block sites and apps under the bill, which is expected to become law at the end of the year.

Other changes in the bill include giving users on the biggest social media sites the option of blocking anonymous accounts, in a move designed to counter online trolls. Large tech firms will be expected to provide “risk assessments” to Ofcom in which they will detail how their platforms could cause harm to users, including the workings of algorithms and the systems they have in place to prevent those harms.