Facebook, Twitter, YouTube praised for “steady progress” quashing illegal hate speech in Europe

Facebook, Twitter and YouTube are likely to be breathing a little easier in Europe after getting a pat on the back from regional lawmakers for making “steady progress” on removing illegal hate speech.

Last week the European Commission warned it could still draw up legislation to try to ensure illegal content is removed from online platforms if tech firms do not step up their efforts.

Germany has already done so, implementing a regime of fines of up to €50M for social media firms that fail to promptly remove illegal hate speech, though the EC is generally eyeing a wider mix of illegal content when it talks tough on this topic — including terrorist propaganda and even copyrighted material.

Today, on the specific issue of illegal hate speech on social media, it was sounding happy with the current voluntary approach. It also announced that two more social media platforms — Instagram and Google+ — have joined the program.

In 2016 Facebook, Twitter, YouTube and Microsoft signed up to a regional Code of Conduct on illegal hate speech, committing to review the majority of reported hate speech within 24 hours and — for valid reports — remove posts within that timeframe too.

The Commission has been monitoring their progress on social media hate speech, specifically to see whether they are living up to what they agreed in the Code of Conduct.

Today it gave the findings from its third review — reporting that the companies are removing 70 per cent of notified illegal hate speech on average, up from 59 per cent in the second evaluation, and 28 per cent when their performance was first assessed in 2016.

Last year, Facebook and YouTube announced big boosts to the number of staff dealing with safety and content moderation issues on their platforms, following a series of content scandals and a cranking up of political pressure (which, despite the Commission giving a good report now, has not let up in every EU Member State).

Also under fire over hate speech on its platform last year, Twitter broadened its policies around hateful conduct and abusive behavior — enforcing the more expansive policies from December.

Asked during a press conference whether the EC would now be less likely to propose hate speech legislation for social media platforms, Justice, Consumers and Gender Equality commissioner Věra Jourová replied in the affirmative.

“Yes,” she said. “Now I see this as more probable that we will propose — also to the ministers of justice and all the stakeholders and within the Commission — that we want to continue this [voluntary] approach.”

Though the commissioner also emphasized she was not talking about other types of censured online content, such as terrorist propaganda and fake news. (On the latter, for instance, France’s president said last month he will introduce an anti-fake news election law aimed at combating malicious disinformation campaigns.)

“With the wider aspects of platforms… we are looking at coming forward with more specific steps which could be taken to tighten up the response to all types of illegal content before the Commission reaches a decision on whether legislation will be required,” Jourová added.

She noted that some Member States’ justice ministers are open to a new EU-level law on social media and hate speech — in the event they judge the voluntary approach to have failed — but said other ministers take a ‘hands off’ view on the issue.

“Having these quite positive results of this third assessment I will be stronger in promoting my view that we should continue the way of doing this through the Code of Conduct,” she added.

While she said she was pleased with progress made by the tech firms, Jourová flagged up feedback as an area that still needs work.

“I want to congratulate the four companies for fulfilling their main commitments. On the other hand I urge them to keep improving their feedback to users on how they handle illegal content,” she said, calling again for “more transparency” on that.

“My main idea was to make these platforms more responsible,” she added of the Code. “The experience with the big Internet players was that they were very aware of their powers but did not necessarily grasp their responsibilities.

“The Code of Conduct is a tool to enforce the existing law in Europe against racism and xenophobia. In their everyday business, companies, citizens, everyone has to make sure they respect the law — they do not need a court order to do so.

“Let me make one thing very clear, the time of fast-moving, disruptive companies such as Google, Facebook or Amazon growing without any supervision or control comes to an end.”

In all, for the EC’s monitoring exercise, 2,982 notifications of illegal hate speech were submitted to the tech firms in 27 EU Member States during a six-week period in November and December last year, split between reporting channels that are available to general users and specific channels available only to trusted flaggers/reporters.

In 81.7% of the cases the exercise found that the social media firms assessed notifications in less than 24 hours; in 10% in less than 48 hours; in 4.8% in less than a week; and in 3.5% it took more than a week.

Performance varied across the companies, with Facebook achieving the best results — assessing the notifications in less than 24 hours in 89.3% of the cases and in less than 48 hours in 9.7% — followed by Twitter (80.2% and 10.4% respectively), and lastly YouTube (62.7% and 10.6%).

Twitter was found to have made the biggest improvement on notification review, having reviewed only 39% of cases within a day as of May 2017.

In terms of removals, Facebook removed 79.8% of the notified content, YouTube 75% and Twitter 45.7%. Facebook also received the largest number of notifications (1,408), followed by Twitter (794) and YouTube (780). Microsoft did not receive any.

According to the EC’s assessment, the most frequently reported grounds for hate speech are ethnic origin, anti-Muslim hatred and xenophobia.

Acknowledging the challenges that are inherent in judging whether something constitutes illegal hate speech or not, Jourová said the Commission does not have a target of 100% removals on illegal hate speech reports — given the “difficult work” that tech firms have to do in evaluating certain reports.

Under EU law, illegal hate speech covers the public incitement to violence or hatred directed against a group of persons, or a member of such a group, defined by reference to race, colour, religion, descent or national or ethnic origin.

“They have to take into consideration the nature of the message and its potential impact on the behavior of the society,” she noted. “We do not have the goal of 100% because there are those edge cases. And… in case of doubt we should have the messages remain online because the basic position is that we protect the freedom of expression. That’s the baseline.”