by Nicole Magney
Over the past several years, tech and social media companies have struggled to mount a comprehensive response to terrorism-related content posted online. In early December, Twitter, Facebook, Microsoft, and YouTube announced a new initiative to share databases of online “hashes” in an effort to curb the spread of extremist material online. As defined in a statement released by Twitter, hashes are digital fingerprints for “violent terrorist imagery or terrorist recruitment videos or images” that have been posted on social media sites. This new measure will make it more efficient to remove the same terrorist content across multiple social media networks. However, the announcement raises two questions: why it took these companies so long to reach such an agreement, and whether sharing these databases will actually reduce terrorist content online. Therefore, tech and social media companies should continue to build on this new shared data initiative, rather than assume that this initiative alone will be enough to curb the future spread of online extremist material.
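The mechanics of such a shared database can be illustrated with a minimal sketch. This is not the companies’ actual system: the function names and sample data below are invented for illustration, and production systems reportedly use perceptual hashes (which tolerate small edits to an image or video), whereas this sketch uses an exact cryptographic hash, which only matches byte-identical re-uploads.

```python
import hashlib

# Hypothetical shared database of hashes for media already flagged
# as terrorist content on one participating platform.
shared_hash_db = set()

def fingerprint(data: bytes) -> str:
    """Return a hex digest serving as the file's 'digital fingerprint'.

    Real hash-sharing systems use perceptual hashing so that slightly
    altered re-uploads still match; SHA-256 is used here only to keep
    the sketch self-contained.
    """
    return hashlib.sha256(data).hexdigest()

def flag_content(data: bytes) -> None:
    """Platform A adds a flagged upload's hash to the shared database."""
    shared_hash_db.add(fingerprint(data))

def is_known_extremist_content(data: bytes) -> bool:
    """Platform B checks a new upload against the shared hashes."""
    return fingerprint(data) in shared_hash_db

# Platform A flags a video; Platform B can then block the same bytes
# without re-reviewing the content.
flag_content(b"...sample video bytes...")
print(is_known_extremist_content(b"...sample video bytes..."))  # True
print(is_known_extremist_content(b"...a different video..."))   # False
```

The point of sharing only hashes, rather than the media itself, is that platforms can coordinate removal without redistributing the offending material.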
Particularly since the rise of the Islamic State’s online presence, social media and tech companies have grappled with striking a balance between removing offensive and violent content and protecting users’ freedom of expression. While companies were right to question the legitimacy of limiting users’ online rights in murky situations, they also struggled to respond to seemingly clear-cut cases. In early 2015, Facebook and YouTube instituted policies that allowed users to flag and report terrorism-related content to site administrators for removal. However, particularly egregious cases where content clearly violated terms of use—for example, an Islamic State YouTube video published three days before the attack in Sousse, Tunisia, in June 2015, which showed three grisly mass executions—continued to proliferate.
The new initiative seeks to fill some of the gaps in social media and tech companies’ policies toward online extremism. The initiative is the brainchild of the Counter Extremism Project (CEP), formed and led by Frances Townsend, who advised President George W. Bush on homeland security. CEP has long advocated that social media and tech companies improve their approach to identifying and removing terrorism-related content. The organization encourages these companies to proactively search for terrorism-related content, rather than simply remove content reported by other users…