Facebook blames COVID-19 for reduced action on suicide, self-injury, and child exploitation content

Sending content reviewers home forced Facebook to rely more on AI

More human moderation needed

Facebook did report some improvements in its AI moderation efforts. The company said its proactive detection rate for hate speech on Facebook had increased from 89% to 95%, leading it to take action on 22.5 million pieces of violating content, up from 9.6 million in the previous quarter.

Instagram’s hate speech detection rate climbed even further, from 45% to 84%, while actioned content rose from 808,900 to 3.3 million.

Rosen said the results show the importance of human moderators.

In other Facebook news, the company today announced new measures to stop publishers backed by political organizations from running ads disguised as news. Under the new policy, news Pages with these affiliations will be banned from Facebook News. They’ll also lose access to news messaging on the Messenger Business Platform and the WhatsApp Business API.

With the US election season approaching, it’s going to be a busy few months for Facebook’s content moderation team.


Story by Thomas Macaulay

Thomas is a senior reporter at TNW. He covers European tech, with a focus on AI, cybersecurity, and government policy.
