Online Harms Bill will protect kids from cesspit of social media – but the danger is it also harms free speech

THE Government proudly announced its revised Online Safety Bill yesterday, in terms that seemed designed to satisfy everybody.
As the headline of the Department for Digital, Culture, Media and Sport (DCMS) statement declared: “New Protections for Children and Free Speech Added to Internet Laws.” Who could argue with that?
Well, we can certainly offer two cheers for the amended Bill.
The changes proposed offer some important steps in the right direction.
But as ever with such complicated legal matters, the devil is in the detail.
Culture Secretary Michelle Donelan said yesterday that unregulated social media has “damaged our children for too long”.
The Bill’s renewed emphasis on child protection will be widely welcomed.
Polling shows that 83 per cent of people think Big Tech should have a duty to protect children who use platforms such as Facebook, Instagram and YouTube.
These platforms have minimum age limits for opening accounts — usually 13 — yet often seem incapable of checking a child’s age.
The revised Online Safety Bill will oblige companies to tell parents how they enforce minimum age limits, and to live up to their promises.
If they fail to comply with the new law, Donelan warns: “They will be hauled in by Ofcom (official communications regulator) and it could lead to severe punishments.”
Indeed. The Bill would allow Ofcom to impose fines of up to ten per cent of a company’s annual worldwide revenue.
For Meta, owner of Facebook and Instagram, that could mean almost £10billion.
The proposed new offence of encouraging self-harm online should also prove popular.
It comes after the October inquest into the suicide of 14-year-old Molly Russell ruled that the “negative effects of online content” which romanticises self-harm had played a part in her death.
Nobody wants to defend online lowlifes encouraging our children to hurt themselves.
FREE SPEECH
On free speech, the most important change is that the Bill no longer seeks to punish social media companies for failing to remove content deemed “legal but harmful”.
The Government has also removed the section which would make it a crime to publish anything likely to cause “emotional distress”.
These welcome changes follow a big reaction against the original plans from some media outlets and free speech campaigners.
It is one thing to want to clean the social media cesspool of filth encouraging terrorism, racist violence or child abuse.
It was quite another to try to use the blunt instrument of the law to make companies censor posts that are legal, just because some might find them offensive or upsetting.
Of course, Labour’s Shadow Culture and Media Secretary, Lucy Powell, condemned these sensible amendments.
It never seems to occur to the control freaks of the Labour Party that the British public — or “users and consumers” — might have a vested interest in defending our society’s bedrock principle of free speech.
So far, so good enough. Yet there are also some red flags ahead in sorting out the devilish details — not least where the Government’s two stated concerns, child protection and free speech, meet.
For instance, the pressure to censor “legal but harmful” content could still apply if it might be viewed by under-18s.
Yet if the social media companies can’t verify ages, how are they to know where to draw the line?
'CENSOR FIRST'
With Ofcom’s fines hanging over their heads, it is not hard to imagine the ban-happy fact-checkers and algorithms deployed by risk-averse Big Tech billionaires taking the safe option to “censor first, ask questions later”.
And what if a responsible newspaper such as The Sun wants to publish a news story in the interest of teenagers as well as their parents — such as, say, the truth about another pop star sex scandal?
Will it risk being banned from social media for fear that under-18s might see it?
It’s a similarly concerning story with the “user-empowerment” proposals in the Bill.
Instead of obliging platforms to remove all such material, these will allow individual users to opt out of viewing content which the content moderators (aka censors) judge to be “legal but harmful”.
However, the unanswered question remains: How are platforms to define what is “legal but harmful” without infringing our freedom of speech?
The Bill also says, in terms as clear as legislative mud, that providers must remove content that they are “reasonably likely to infer” is illegal. Again, who gets to define “reasonably”?
No doubt the Government faces an unenviable task in sorting out such complex and controversial legislation.
But it could start by insisting that social media platforms find a reliable way of verifying users’ ages, so we can try to draw a clear line between adults and children online.
We can all agree that child protection is a pressing concern.
But history also shows that, if we’re not careful, worries about the safety of children can be exploited by those who wish to curb the freedom of adults.
And that, left to their own devices, Big Tech — whose own “safety” from punishment is their main concern — will opt for automated censorship as the easier option.
During his Tory leadership campaign, Prime Minister Rishi Sunak insisted that, while the original Online Safety Bill was absolutely right on child protection, “The challenge is whether it strays into the territory that suppresses free speech”.
Now is the time for his Government to rise to that challenge.
The revised, improved Bill should not become another excuse for treating grown-ups like kiddies who need to be told what we should say, see, hear or think.