The UK government’s Online Safety Bill means that social media companies will be required to remove harmful and inappropriate content quickly or face fines from Ofcom.
The published draft legislation, previously known as the Online Harms Bill, will tackle an array of harmful content online, ranging from child sexual abuse to suicide-related posts, hate speech, and fraud.
The Online Safety Bill, introduced as part of the Queen’s Speech during the state opening of Parliament, holds social media sites accountable for harmful user-generated content. Under the legislation, companies will have a duty of care to take “robust” action against illegal abuse on their platforms – including hate crimes, harassment, and threats – in a bid to remove such content and limit its spread. They will also be required to report child sexual exploitation and abuse (CSEA) content identified on their platforms to the relevant law enforcement agencies.
The Bill places a particular focus on protecting children and keeping them safe online, covering a vast range of content to which children might fall victim – including grooming, revenge porn, hate speech, images of child abuse, and posts relating to suicide and eating disorders.
But it goes much further than that, also covering terrorism, racial abuse, misinformation, and pornography.
Ofcom will be responsible for regulating social media companies and will have the power to fine those that fail to meet the duty of care up to £18 million or 10% of annual global turnover, whichever is higher. There will also be a new criminal offence for senior managers, though this provision will initially be deferred.
The Bill has been two years in the making, and with the sharp rise in online abuse and fraud throughout the pandemic, its introduction is timely.