Facebook CEO Mark Zuckerberg wants to hire thousands of content reviewers to pore over flagged content this year, a plan he reiterated during his hearings on Capitol Hill last week.
“By the end of this year, we’re going to have more than 20,000 people at the company working on security and content review, because this is important,” Zuckerberg explained during the hearings, adding that when content gets flagged on Facebook (FB), content reviewers examine it and take it down if it violates the social network’s Community Standards.
Facebook is already well on its way to meeting Zuckerberg’s ambitious goal, having hired 15,000 people so far, 7,500 of whom are content reviewers, the social network told Yahoo Finance. Those reviewers are tasked with examining text, photos and videos flagged by Facebook users as pornographic, racist or violent. The work can be fast and furious: a Wall Street Journal report from December indicated that content reviewers may view up to 8,000 posts a day.
According to a Facebook spokesperson, content reviewers are a mix of full-time employees and workers supplied by recruiting firms such as Accenture, Pro Unlimited and Arvato.
Accenture, Pro Unlimited and Arvato did not immediately respond to requests for comment.
With each hire, Facebook seeks native language speakers with “market knowledge” — an understanding of local and regional issues, such as who the political figures are in a given area. While flagging nudity and pornography may not require it, context is key to identifying, say, instances of hate speech that tie into local issues in a certain country or region.
Facebook does not disclose specifically how much it pays content reviewers; however, the social network told Yahoo Finance that they are paid “above average” for the industry. One content reviewer who left in October 2016 reportedly earned $24 an hour.
Being a content reviewer can be a grueling job: reviewers spend their days poring over photos, videos and text that can be extremely violent, graphic or prurient.
To combat that, Facebook now offers training and support, which includes regular access to psychologists and therapists.
Over time, Zuckerberg added, the social network plans to shift toward having more of the content on its site flagged by artificial intelligence tools developed in-house. Already, Facebook’s AI tools flag 99% of ISIS and Al Qaeda content before any Facebook user sees it.
Some problems lend themselves more easily to AI solutions than others, Zuckerberg said during the Congressional hearings, pointing to hate speech as one of the most challenging, given how “linguistically nuanced” it can be. However, he expressed optimism that sometime within the next five to 10 years, the social network will have AI tools that can successfully navigate those nuances.