With violent extremists using the internet to promote their propaganda, spread hate, and even livestream attacks and executions, the pressure is on tech companies to become more proactive in preventing terrorism by banning and removing terrorist content from their platforms.
In August of this past year, the president and the FBI both called upon tech companies to step up their game. The next month, Facebook expanded its definition of terrorism and banned 200 white supremacist organizations from the platform.
In December, Facebook posted an update on its blog about its counterterrorism efforts, written by Facebook VP for Global Policy Management, Monika Bickert, and its Head of Counterterrorism and Dangerous Organizations Policy for Europe, the Middle East and Africa, Dr. Erin Saltman. The existence of such job titles points to tech companies’ efforts to counter terrorism by dedicating teams to the cause.
The post mostly describes the efforts of the Global Internet Forum to Counter Terrorism (GIFCT), an organization started in the summer of 2017, chaired by Facebook, and on its way to becoming an independent 501(c)(3) organization. GIFCT members also include Microsoft, Twitter, YouTube, Pinterest, Dropbox, Amazon, LinkedIn, and WhatsApp.
The organization seeks to facilitate collaboration not only among the tech companies but also with governments, the public, other NGOs such as Tech Against Terrorism, academic institutions, and researchers. GIFCT’s mission statement is to “prevent terrorists and extremists from exploiting digital platforms.” This, according to Facebook’s blog post, includes thwarting terrorists’ “abilities to promote themselves, share propaganda and exploit digital platforms to glorify real-world acts of violence.”
GIFCT’s nine-point plan includes actions each member company is expected to take individually, as well as ways in which members will work with one another and with outside entities. Each company has committed to developing terms of use that explicitly prohibit terrorist content so that there is a “clear basis” for removing it, easy-to-use tools for users to flag content, improved detection and removal technologies, and vetting and moderation systems for livestreams. Companies must also issue transparency reports disclosing the detection and removal of terrorist content.
Member companies have also committed to sharing knowledge, tools, and research for improving technologies that help flag and remove terrorist content, including a shared dataset for developing artificial intelligence. For example, platforms use “hashes,” which are like digital fingerprints, to quickly remove terrorist content en masse.
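To make the hash idea concrete, here is a minimal Python sketch of how that matching step might work, assuming a simple cryptographic hash. Every name in this sketch is hypothetical; in practice, member companies reportedly favor perceptual hashes (Facebook open-sourced its PDQ image-hashing algorithm in 2019) so that re-encoded or lightly edited copies still match.

```python
import hashlib

# Hypothetical stand-in for a shared hash database: a set of hex digests
# contributed by member companies for known terrorist content.
SHARED_HASH_DATABASE: set[str] = set()

def content_hash(data: bytes) -> str:
    """Return the SHA-256 hex digest of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def share_hash(confirmed_content: bytes) -> None:
    """Add a digest to the shared database so other platforms can match it."""
    SHARED_HASH_DATABASE.add(content_hash(confirmed_content))

def flag_known_content(upload: bytes) -> bool:
    """True if the upload exactly matches previously shared content."""
    return content_hash(upload) in SHARED_HASH_DATABASE

# Example: one platform shares a hash; another matches an identical upload.
share_hash(b"example propaganda video bytes")
assert flag_known_content(b"example propaganda video bytes")
```

The limitation of this simplified approach is worth noting: a cryptographic hash like SHA-256 changes completely if even one byte of the file changes, so it catches only exact copies. That is why perceptual hashing, which tolerates minor edits and re-encoding, is preferred for matching images and video at scale.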
Companies will also work together to maintain Crisis Protocols for responding quickly to “emerging or active” terrorist attacks (Facebook has activated its protocol 35 times since creating it); to educate the public; and to combat bigotry by working with organizations that “challenge hate and promote pluralism and respect online.”
For every advance tech companies make, terrorists come up with new strategies to evade it. It can also be challenging to agree on definitions of terrorism across different countries and groups, especially when extremist organizations have political affiliations. Tech companies are doing their best to keep the trauma of terrorism and violent extremism from spreading online, but there is still much work to be done.
Ellen Vessels, a Staff Writer at The American Genius, is respected for their wide range of work, with a focus on generational marketing and business trends. Ellen is also a performance artist when not writing, and has a passion for sustainability, social justice, and the arts.