Facebook’s global head of safety has admitted the social network needs more people to help tackle hate speech, with its detection software failing to pick up on it as effectively as other offensive content.
While insisting that the platform remained a safe place for its users, Antigone Davis told Sky News the company acknowledged that more “human involvement” was needed to help bolster its efforts to protect them from hate speech.
“When you’re talking about hate speech, it can require a good deal of context in which to understand the term that someone has used or how they’re using it,” Ms Davis said.
“I think that is an area in which we need human involvement.”
Facebook’s difficulty with hate speech is clear in the firm’s own figures.
In its recently released Transparency Report, Facebook reported that its detection software had found 99.7% of spam and 99.5% of terrorist propaganda before any of its two billion users flagged it.
But its algorithms had discovered only 38% of hate speech, despite many of the posts being easily accessible to any user with a quick search. Sky News found pages containing the antisemitic phrase “Jewish Ritual Murder”.
And despite numerous protests from Jewish human rights groups, Facebook left a similar page up for years until it was finally removed in 2018.
But Ms Davis denied suggestions the platform was struggling to make its technology work for hate speech.
She said: “I wouldn’t say struggling. I’d say it’s not as good or not as valuable for hate speech as it is for other content.”
The company has promised to double the number of moderators it employs, from 10,000 to 20,000, although it will not say how many are in the UK.
But its struggle to define hate speech has already come under further scrutiny this week, with news that Facebook would be reviewing its policies on “white separatism” and “white nationalism” after pressure from US civil rights groups.
Leaked documents seen by the tech site Motherboard earlier this year had shown that moderators were explicitly instructed to allow the two phrases to remain on the site.
Back in the UK, the government is considering a new internet regulator, with penalties for websites that do not remove illegal hate speech within a specific time period.
A similar law is already in force in Germany, where Facebook has confirmed that it has over 400 human moderators.