Leaked Facebook Documents Reveal Content Removal Policies

Leaked Facebook documents reveal the monumental task faced by a social network that is constantly bombarded with inappropriate and offensive content.

According to the leaked documents, which were obtained by The Guardian, Facebook reviews more than 6.5 million reports of potentially fake accounts per week. Add to that the hate speech, child exploitation, revenge porn, and violent images and videos, and it is easy to see why Facebook moderators are overworked and often have “just 10 seconds” to make a call on individual posts.

Facebook currently has more than 1.94 billion users, making it hard for the firm to “keep control of its content,” a source told The Guardian. “It has grown too big, too quickly.”

Some of the instructions given to moderators seem strange, even contradictory, as Facebook walks the fine line between trying to please both free speech advocates and those who view the company as a publisher with a responsibility to protect its members from inappropriate content.

Some of the guidelines, as laid out by The Guardian, include:

  • The necessity to delete remarks that include a threat to Donald Trump because, as President of the United States, he falls into a protected category. Non-specific threats such as “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat” are allowed because they are not seen as credible.
  • Labeling all videos of violent deaths as “disturbing.” Such clips do not necessarily have to be deleted, however, if they can raise awareness of important issues such as mental illness.
  • Permitting images of abuse and bullying of children as long as they are not sexual in nature and do not have “a sadistic or celebratory element.”
  • Permitting abortion videos as long as there is no nudity.
  • Permitting users to livestream self-harm because Facebook does not “want to censor or punish people in distress.”

As one of the leaked documents explains: “Violent language is most often not credible until specificity of language gives us a reasonable ground to accept that there is no longer simply an expression of emotion but a transition to a plot or design. From this perspective language such as ‘I’m going to kill you’ or ‘Fuck off and die’ is not credible and is a violent expression of dislike and frustration.”

Facebook’s head of global policy management, Monika Bickert, explained the social network’s policies in a post published by The Guardian:

We aim to remove any credible threat of violence, and we respect local laws. We don’t always share the details of our policies, because we don’t want to encourage people to find workarounds – but we do publish our Community Standards, which set out what is and isn’t allowed on Facebook, and why.

Our standards change over time as our community grows and social issues around the world evolve. We are in constant dialogue with experts and local organizations, on everything from child safety to terrorism to human rights.

Sometimes this means our policies can seem counter-intuitive. As the Guardian reported, experts in self-harm advised us that it can be better to leave live videos of self-harm running so that people can be alerted to help, but to take them down afterwards to prevent copycats.

The Guardian’s report comes two weeks after Facebook announced its plans to hire 3,000 new people to police posted content. The new hires will be added to the social network’s community operations team across the globe, taking the size of the department from 4,500 to 7,500.

Whether the additional staff will help Facebook stay on top of the offensive content problem — at least for now — is anyone’s guess. But one thing is certain: the problem will continue to grow as Facebook’s membership expands.

 



Jennifer Cowan is the Managing Editor for SiteProNews.
