Ofcom will be put in charge of regulating the internet, the government has announced, with executives at internet firms potentially facing substantial fines or even prison sentences if they fail to protect users from “harmful and illegal content” online.
Under the proposals, Ofcom will not have the power to remove specific posts from social media platforms. Instead, it will require internet companies such as Facebook and Google to publish explicit statements setting out which content and behaviour they deem to be acceptable on their sites. The media regulator will then ensure internet businesses enforce these standards “consistently and transparently”.
The culture secretary, Nicky Morgan, and the home secretary, Priti Patel, promised that changes to the proposals would guarantee free speech for adults online and only target larger internet businesses. However, some tech start-up groups warned that the regime would still place an enormous burden on smaller businesses to police content that is potentially harmful but not illegal.
Among the proposals announced on Wednesday:
- Any business that enables the sharing of user-generated content – such as online comments or video uploads – is likely to fall under the new rules on reducing online harms, affecting hundreds of thousands of British companies.
- Internet businesses will be required to publish annual transparency reports explaining what harmful content they have removed and how they are meeting their standards.
- The government wants companies to bring back age verification for certain websites, following an abandoned attempt to introduce it last year to restrict access to online pornography.
The plan has already raised concerns among the publishers of major newspapers: some outlets that campaigned for the legislation as a way of curtailing the power of internet companies now fear that their own news website comment sections could be regulated for online harms. Ministers have previously reassured traditional publishers that they will be excluded from the legislation.
The announcement was made in the government’s response to a consultation over the online harms white paper, unveiled last April. There will now be a period of lobbying, as internet companies attempt to avoid the legislation placing substantial new burdens on their businesses.
“With Ofcom at the helm of a proportionate and strong regulatory regime, we have an incredible opportunity to lead the world in building a thriving digital economy, driven by groundbreaking technology, that is trusted by and protects everyone in the UK,” said Morgan.
Patel added: “While the internet can be used to connect people and drive innovation, we know it can also be a hiding place for criminals, including paedophiles, to cause immense harm. It is incumbent on tech firms to balance issues of privacy and technological advances with child protection.”
The regulations are broadly focused on two new sets of requirements. The first, around illegal content, will set platforms new targets to ensure that such content is removed quickly and, ideally, prevented from being posted in the first place, with a particular focus on terrorist and child sexual abuse content.
The second set of requirements, which attempts to minimise the distribution of “harmful” content, such as material that encourages or glorifies self-harm or suicide, focuses instead on requiring large online platforms to better enforce their own terms of service. The goal is that platforms which say they remove such content, and are thus safe for children to be active on, are held to that promise, while platforms that prioritise free speech over safety are equally clear about their objectives.
Those new regulations will apply only to companies that allow the sharing of user-generated content, the government says.