The UK government is taking a hard line when it comes to online safety, moving to establish what it says is the world’s first independent regulator to keep social media companies in check.

Companies that fail to meet the requirements will face large fines, and senior directors proven to have been negligent will be held personally liable. Offending companies may also find access to their sites blocked.

The new measures, designed to make the internet a safer place, were announced jointly by the Home Office and the Department for Digital, Culture, Media and Sport. The introduction of the regulator is the central recommendation of a highly anticipated government white paper, titled Online Harms, published early Monday in the UK.

The regulator will be tasked with ensuring social media companies are tackling a range of online problems, including:

  • Inciting violence and spreading violent content (including terrorist content)
  • Encouraging self-harm or suicide
  • The spread of disinformation and fake news
  • Cyberbullying
  • Children accessing inappropriate material
  • Child exploitation and abuse content

As well as applying to the major social networks, such as Facebook, YouTube and Twitter, the requirements will also have to be met by file-hosting sites, online forums, messaging services and search engines.

“For too long these companies have not done enough to protect users, especially children and young people, from harmful content,” said Prime Minister Theresa May in a statement. “We have listened to campaigners and parents, and are putting a legal duty of care on internet companies to keep people safe.”

Google and Facebook didn’t immediately respond to a request for comment.

The UK government is trying to decide whether to appoint an existing regulator to the job or to create a brand-new regulator purely for this purpose. Initially the regulator will be funded by the tech industry, and the government is considering a levy on social media companies.

“The era of self-regulation for online companies is over,” said Digital Secretary Jeremy Wright in a statement. “Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough.”

The global move toward regulation

The measures announced by the UK on Monday are part of a larger global move toward greater regulation for big tech. The efforts originated in Europe, but have been gaining traction in the US, as well as with the leaders of tech companies, including Mark Zuckerberg and Tim Cook.

At a time of great political upheaval in the UK, the government has chosen to stand up to Silicon Valley tech companies, while hoping they'll continue to create local jobs once the country has departed the EU. Some elements of the new regulatory process are also still up for debate.

Damian Collins, chair of Parliament's Digital, Culture, Media and Sport Committee, which recently published a report on fake news that branded social media companies "digital gangsters," said it's important that the regulator have the power to launch investigations when necessary.

“The Regulator cannot rely on self-reporting by the companies,” he said. “In a case like that of the Christchurch terrorist attack for example, a regulator should have the power to investigate how content of that atrocity was shared and why more was not done to stop it sooner.”

Vinous Ali, head of policy for industry body TechUK, welcomed the publication of the white paper, but said in a statement that some elements of the government's approach remain "too vague" and that the government will have to be clear about exactly what it wants the regulator to achieve. The "duty of care" that the government believes social media companies owe users is not clearly defined and is open to broad interpretation, she added.

The Internet Association, which represents many of the world's biggest tech companies, including Facebook, Google and Twitter, said it's important that any proposals be practical for platforms to implement regardless of their size.

A spokeswoman for Twitter said in a statement that the company is committed to prioritizing the safety of users, pointing to over 70 changes the platform made last year.

“We will continue to engage in the discussion between industry and the UK Government,” she said, “as well as work to strike an appropriate balance between keeping users safe and preserving the internet’s open, free nature.”
