IDare Act

Alternative Narrative

Censorship: cases about hate speech and content removal on Facebook

Facebook, an American for-profit company, was founded in February 2004 by four Harvard students under the name “thefacebook.com”, which changed to the current “facebook.com” in the summer of 2005. A few years later, the social media site had become one of the biggest websites in the world. Facebook is commonly used by journalists, activists, and people all over the world to share stories, facts, and news. All posts written or shared by users must meet the “Community Standards” criteria in order to stay online. The written Standards can be found at https://facebook.com/communitystandards/.

They are divided into four main categories (helping to keep you safe, encouraging respectful behaviour, keeping your account and personal information secure, and protecting your intellectual property), each of which is further divided into sections with related aims. For example, how the platform helps people who feel threatened by others, or how it handles reports of criminal activity, both fall within the “helping to keep you safe” category.

Censorship is commonly defined as the institution, system, or practice of examining communications and deleting material considered sensitive or harmful. It is also worth noting how Facebook researchers define self-censorship: “the act of preventing oneself from speaking”1. Legal topics related to data processing are covered by the Facebook Data Use Policy. The paper entitled “The post that wasn’t: Facebook monitors everything users type and not publish” by Evelyne J.B. Sørensen highlights that the policy does not explicitly state whether Facebook manages “self-censorship” behaviour. This article, however, focuses on cases of hate speech and content removal.


Since the removal of the “napalm girl” photograph, Facebook has faced an eruption of censorship accusations. The removed picture, taken by Huynh Cong Út (aka Nick Ut), shows the nine-year-old girl Kim Phúc running away from a napalm attack; it is widely credited with helping to turn public opinion against the Vietnam war. Facebook removed the picture and defended its action, claiming: “While we recognize that this photo is iconic, it’s difficult to create a distinction between allowing a photograph of a nude child in one instance and not others”. A large movement against the decision followed, yet Facebook went on to delete multiple uploads of Nick Ut’s iconic image and banned user accounts. Decisions like these, made under the umbrella of censorship, raise the following questions: should Facebook be the arbiter of what we watch and read? Is Facebook under pressure from governments?


It is also very challenging to determine whether the decision to ban a picture is taken entirely by Facebook staff or is, instead, partially or wholly influenced by governments. After these accusations, Sheryl Sandberg, Chief Operating Officer, admitted the error and said: “These are difficult decisions and we don’t always get it right”. Nevertheless, this was not the first time such a case had occurred. According to CNET magazine, Facebook teams in Texas, Ireland, India, and the California headquarters process millions of reports a week. These are the teams responsible for taking down posts, images, videos, and any other uploaded material (in accordance with the rules laid down in the Data Use Policy) that does not conform to the Community Standards.

In September 2016, according to Al-Jazeera, four editors from the Shehab News Agency and three executives from the Quds News Network could not access their personal Facebook accounts. Both agencies were covering daily news in the occupied Palestinian territories, which is why Nisreen al-Khatib, a journalist at the Quds News Network, suggested that they had probably been banned due to an agreement between Facebook and the government. She added: “Maybe they don’t want this covered, especially in the West Bank, where executions have happened in recent days. Maybe that affects them on social media and they want to stop these pages to hide the proof”. After the accounts were suspended, a Facebook spokesperson apologized, stating: “Our team processes millions of reports each week, and we sometimes get things wrong. We’re very sorry about this mistake”.


Returning to the question from an earlier post: is there a limit on our “freedom of expression” or not? Today, most companies act to counter online hate speech. Facebook, for example, tries to remove hate speech quickly. According to Marne Levine, Facebook’s former vice-president of global public policy, there are sometimes instances of offensive content that are not hate speech, and Facebook teams “work to apply fair, thoughtful, and scalable policies” (the Community Standards). But the problem arises when legitimate content is censored.

By Luis Alcaraz

  1. Das, S. & Kramer, A. (2013). Self-Censorship on Facebook. Retrieved from https://www.aaai.org/ocs/index.php/ICWSM/ICWSM13/paper/viewFile/6093/6350