Regularly criticized by civil society over hate speech and other racist and insulting content, Facebook has recently focused its content moderation efforts on these topics, and on Thursday, November 19, touted its progress in automatic detection.
The social media giant has added a new indicator to its battery of statistics: the prevalence of content that violates its rules.
Over 22 million pieces of content removed from Facebook this summer
"From July to September, the prevalence of hate speech was between 0.10% and 0.11%," said Guy Rosen, the group's vice president in charge of integrity, at a press conference. In other words, out of every 10,000 pieces of content viewed on Facebook, on average between 10 and 11 contained hate speech. "It's like an air quality test to determine the concentration of pollutants," he added.
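To make the metric concrete, here is a minimal sketch of the arithmetic behind prevalence. The function name and the figures are illustrative only, not Facebook's actual methodology or code:

```python
# Minimal sketch of the prevalence arithmetic described above.
# The function name and figures are illustrative, not Facebook's
# actual methodology or code.

def prevalence(violating_views: int, total_views: int) -> float:
    """Share of content views that contained violating content."""
    return violating_views / total_views

# A prevalence of 0.10% to 0.11% corresponds to 10 or 11 violating
# views out of every 10,000 content views:
print(f"{prevalence(10, 10_000):.2%}")  # 0.10%
print(f"{prevalence(11, 10_000):.2%}")  # 0.11%
```

Note that prevalence measures how often violating content is *seen*, weighting each piece of content by its views, rather than simply counting how many violating posts exist.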
Over the summer, Facebook removed more than 22 million pieces of hateful content, and Instagram 6.5 million, double the previous quarter's total.
For years, the company has mastered the automatic removal of certain types of content, linked to pornography or terrorism for example.
Hate speech is more complicated to manage, as machines have difficulty distinguishing between news, humor, parody, rumor and insult.
Advances in artificial intelligence
Executives were therefore pleased with the progress made in artificial intelligence (AI). "In 2017, the proactive detection rate for hate speech (removed before any report by a user) was 23.6%. Today it is 95%," said Guy Rosen.
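For illustration, a sketch of the proactive detection rate cited above: the share of removed hate speech that automated systems flagged before any user reported it. The names and counts below are hypothetical:

```python
# Illustrative sketch of the proactive detection rate cited above:
# the share of removed hate speech that automated systems flagged
# before any user reported it. Names and counts are hypothetical.

def proactive_rate(flagged_before_report: int, total_removed: int) -> float:
    """Fraction of removals initiated by automated detection."""
    return flagged_before_report / total_removed

# 2017: roughly 23.6 of every 100 removals were proactive;
# today: about 95 of every 100.
print(f"{proactive_rate(236, 1_000):.1%}")  # 23.6%
print(f"{proactive_rate(950, 1_000):.1%}")  # 95.0%
```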
However, the social network does not plan, in the short or even the long term, to reduce its teams of moderators: 35,000 people are responsible for safety in general, including 15,000, mostly employed by subcontractors, who focus on content moderation.
"Advances in AI allow us to move faster, to systematize human decisions so they do not have to be repeated, and to devote more time and energy to more complicated subjects," argued Mike Schroepfer, chief technology officer of the Californian group.
These announcements come as moderators voice their dissatisfaction with the group.
On Wednesday, more than 200 Facebook moderators signed an open letter criticizing the company for forcing them to return to the office despite the pandemic, and for what they describe as a hasty reliance on AI.
"Important speech was swept up by Facebook's filter, while dangerous content, such as self-harm, got through," they charge.
Facebook responded by asserting that health conditions for employees returning to the office were optimal.