The new rules could have significant ramifications for tech companies, potentially forcing them to alter their platforms in order to comply.
Big Tech has been under heavy scrutiny of late after a video of last month’s attack on two mosques in New Zealand was shared repeatedly on a number of sites.
In the U.K. specifically, social networks have come under political pressure following the death of teen Molly Russell, who died by suicide in 2017 after viewing distressing material about self-harm and suicide on Instagram.
The Facebook-owned photo-sharing app subsequently said it would ban all graphic self-harm images.
Facebook says it has already made a number of changes aimed at removing harmful content and that it is being more transparent about how it enforces its policies on such content.
“New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech,” Rebecca Stimson, Facebook’s head of U.K. public policy, said in a statement Monday.
“These are complex issues to get right and we look forward to working with the Government and Parliament to ensure new regulations are effective.”
Twitter, meanwhile, says it has been an “active participant” in discussions between the tech industry and the government on online safety.
“We are already deeply committed to prioritizing the safety of our users, as evidenced by the introduction of over 70 changes to our policies and processes last year to improve the health and safety of the public conversation online,” Katy Minshall, head of public policy for Twitter U.K., said in a statement.
“We look forward to engaging in the next steps of the process, and working to strike an appropriate balance between keeping users safe and preserving the open, free nature of the internet.”