With violent extremists using the internet to promote their propaganda, spread hate, and even livestream attacks and executions, the pressure is on tech companies to become more proactive in preventing terrorism by banning and removing terrorist content from their platforms.
In August of this past year, the president and the FBI both called upon tech companies to step up their game. The next month, Facebook expanded its definition of terrorism and banned 200 white supremacist groups from the platform.
In December, Facebook posted an update on its blog about its counterterrorism efforts, written by Facebook VP for Global Policy Management, Monika Bickert, and its Head of Counterterrorism and Dangerous Organizations Policy for Europe, the Middle East and Africa, Dr. Erin Saltman. The existence of such job titles points to tech companies’ efforts to counter terrorism by dedicating teams to the cause.
The post mostly describes the efforts of the Global Internet Forum to Counter Terrorism (GIFCT), an organization started in the summer of 2017, chaired by Facebook, and on its way to becoming an independent 501(c)(3) organization. GIFCT members also include Microsoft, Twitter, YouTube, Pinterest, Dropbox, Amazon, LinkedIn, and WhatsApp.
The organization seeks to facilitate collaboration not only between the tech companies, but also with governments, the public, other NGOs like Tech Against Terrorism, academic institutions, and researchers. GIFCT’s mission statement is to “prevent terrorists and extremists from exploiting digital platforms.” This, according to Facebook’s blog post, includes thwarting terrorists’ “abilities to promote themselves, share propaganda and exploit digital platforms to glorify real-world acts of violence.”
Member companies have also committed to sharing knowledge, tools, and research for improving technologies that help flag and remove terrorist content, including a shared dataset for developing artificial intelligence. For example, platforms use “hashes,” which are like digital footprints, to quickly remove terrorist content en masse.
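To make the “digital footprint” analogy concrete, here is a minimal sketch in Python of how hash-based matching can work in principle. All names here are illustrative assumptions, not GIFCT’s actual implementation; real shared-hash systems also rely on perceptual hashing (such as PhotoDNA) rather than only exact cryptographic digests.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw content as a hex string.

    The digest acts like a fingerprint: identical bytes always
    produce the identical digest.
    """
    return hashlib.sha256(data).hexdigest()

# Hypothetical shared dataset: digests of content that member
# platforms have already identified and flagged.
shared_hash_set = {sha256_hex(b"previously flagged video bytes")}

def is_flagged(upload: bytes) -> bool:
    """Check an upload's digest against the shared set.

    A match means the exact same file was flagged before, so it
    can be removed automatically without re-reviewing the content.
    """
    return sha256_hex(upload) in shared_hash_set
```

Note that an exact hash like SHA-256 only matches byte-identical files; a re-encoded or cropped copy would slip through, which is why production systems layer perceptual hashes and machine-learning classifiers on top of this basic idea.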
Companies will also work together to maintain Crisis Protocols for responding quickly to “emerging or active” terrorist attacks (Facebook has activated its protocol 35 times since creating it); to educate the public; and to combat bigotry by working with organizations that “challenge hate and promote pluralism and respect online.”
For every advance that tech companies make, terrorists are also coming up with innovative new strategies to bypass obstacles. Furthermore, it can be challenging to agree on definitions across different countries and groups, especially when extremist organizations have political affiliations. Tech companies are doing their best to prevent the trauma of terrorism and violent extremism from spreading online, but there is still much work to be done.