Attitudes, online hate speech, and hidden censorship

Everyday situations that require Internet use carry an inherent risk. The three main search engines (Google, Bing, and Yahoo! Search) collect and store data in order to index and retrieve information, optimizing the speed and performance of the searches you perform. Search engines process copious amounts of information through various techniques, but the results they return often surface a byproduct known as ‘online hate speech’.

According to the European Commission’s code of conduct on countering illegal hate speech online, Facebook, Microsoft, Twitter, and YouTube, together with other platforms and social media companies, share a collective responsibility to promote and facilitate freedom of expression across the online world. They also commit to tackling illegal hate speech online, which is regulated by ‘Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law’. Companies have advanced their own definitions of online hate speech and have developed specific procedures to combat it. For example, Google (the main subsidiary of Alphabet Inc.) states in its user content and conduct policy: ‘Our products are platforms for free expression. We do not support content that promotes or condones violence against individuals or groups based on race or ethnic origin, religion, disability, etc. This can be a delicate balancing act, but if the primary purpose is to attack a protected group, the content crosses the line’.

Microsoft, likewise, announced in August 2016, in an official blog post, new resources for reporting cases of online hate speech, stating: ‘When hate speech is reported to us, we will evaluate each complaint, consider context and other factors, and determine appropriate action with respect to the content and the user’s account’.

Connecting to the Internet puts you at risk of coming into contact with offensive material. Researchers have developed strategies to identify the behaviours and attitudes that lead to exposure to online hate speech. Routine Activity Theory (also known as RAT) establishes how victims’ behaviours can expose them to dangerous people, places, and events. Social Learning Theory is also useful for understanding this topic, since it describes how the environment can affect behaviour. Costello, Hawdon, Ratliff, and Grantham (2016) report in their article, ‘Who views online extremism? Individual attributes leading to exposure’, that according to an online survey which collected data from 1,034 youth and young adults in America, the majority had been exposed to negative materials online. They based their research on both Routine Activity Theory and Social Learning Theory. Drawing on the latter, they argue that the likelihood of viewing extremist messages is higher for those who distrust the government, since that distrust frequently leads them to online sites where hate speech messages are in abundance.

Companies have designed various strategies to prevent people from being exposed to online hate speech. In September 2016, Instagram’s CEO and co-founder, Kevin Systrom, announced on Instagram’s official blog a keyword moderation tool that flags words users consider offensive; the app automatically hides comments that contain such words. YouTube, by contrast, relies on two methods of reporting videos for content violations: community members can flag videos with inappropriate content, and a separate reporting tool lets them submit a more detailed report.
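To make the mechanism concrete, here is a minimal sketch of keyword-based comment filtering in Python. The blocklist, function names, and whole-word matching rule are illustrative assumptions, not Instagram’s actual implementation.

```python
import re

# Hypothetical blocklist; in a tool like Instagram's, each account
# supplies its own list of words it considers offensive.
BLOCKED_WORDS = {"badword", "slur"}

def is_flagged(comment: str) -> bool:
    """Return True if the comment contains any blocked word.

    Matching is case-insensitive and on whole words only, so benign
    words that merely contain a blocked substring are not hidden.
    """
    tokens = re.findall(r"[a-z']+", comment.lower())
    return any(token in BLOCKED_WORDS for token in tokens)

def visible_comments(comments: list[str]) -> list[str]:
    """Return only the comments that pass the keyword filter."""
    return [c for c in comments if not is_flagged(c)]

print(visible_comments(["Nice photo!", "You badword"]))  # -> ['Nice photo!']
```

Whole-word matching is a deliberate design choice in this sketch: naive substring matching famously hides innocent words that happen to contain a blocked term, the so-called ‘Scunthorpe problem’.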

[Image: digital censorship, by SvitalskyBros]

The problem arises when hidden, indirect censorship takes place on the Internet without being labeled as censorship. Online hate speech reporting systems should not be used by companies, users, or governments to censor legitimate content. Indirect censorship on the Internet occurs when the mechanisms for controlling user-submitted content are not open, transparent, or accountable.
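One way to make such mechanisms more accountable, sketched below under assumed record fields and function names, is to write every moderation decision to an append-only log that reviewers and researchers can audit. This illustrates the transparency principle in general terms; it is not any platform’s actual system.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ModerationRecord:
    content_id: str   # identifier of the reported item
    action: str       # e.g. "removed", "kept", "restricted"
    reason: str       # the policy clause invoked
    source: str       # e.g. "user_flag", "automated", "government_request"
    timestamp: float

def log_decision(record: ModerationRecord,
                 path: str = "moderation_log.jsonl") -> None:
    """Append the decision to a log file so removals can be audited later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(ModerationRecord(
    content_id="video/123",
    action="removed",
    reason="hate_speech_policy",
    source="user_flag",
    timestamp=time.time(),
))
```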

By Luis Alcaraz