Facebook going off script
Facebook recently announced its opposition to the use of its platform by terrorist organizations and specified several strategies intended to restrict terrorist use of social media, in what is widely being reported, in various versions, as “Facebook commits to using artificial intelligence to fight terrorism.”
Let’s talk for a minute about what that actually means.
Intelligence as numbers
“Artificial intelligence to fight terrorism” doesn’t mean we get Skynet saving us from suicide bombers. That would be rad, up to the point Skynet tags us all as suicide bombers and sends the drones after us. Skynet is not, as yet, very bright.
Mine keeps showing me ads for Dodge Rams, and I’m a writer in a studio apartment. I have no use for a truck. I barely need legs.
At the moment all “artificial intelligence” consistently means is “self-improving algorithm,” or as a marketing term, “set of self-improving algorithms with a voice interface and a price tag.”
Facebook “using artificial intelligence to fight terrorism” means “Facebook deploying image and text recognition algorithms that will correlate content with known terrorist media and distribute discipline accordingly.”
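To make that concrete, here is a toy sketch of hash-based content matching, the general idea behind industry systems like Microsoft's PhotoDNA. The real algorithms are proprietary and far more robust; everything below, including the function names and the tiny "images," is an invented illustration, not Facebook's actual pipeline.

```python
def average_hash(pixels):
    """Compute a simple perceptual hash of a grayscale image.

    `pixels` is a 2D list of 0-255 intensity values. Each bit of the
    hash records whether a pixel is brighter than the image's mean, so
    near-duplicate images (recompressed, lightly edited) produce
    near-identical hashes.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Count the bits on which two hashes differ."""
    return sum(x != y for x, y in zip(a, b))

def matches_known_media(image, known_hashes, threshold=2):
    """Flag an image whose hash lands within `threshold` bits of any
    entry in the known-media database."""
    h = average_hash(image)
    return any(hamming(h, k) <= threshold for k in known_hashes)

# A "known" image, a slightly altered re-upload of it, and an
# unrelated image (2x2 grayscale grids, purely for demonstration).
known = [[10, 200], [220, 30]]
reupload = [[12, 198], [221, 33]]   # small pixel-level edits
unrelated = [[200, 10], [30, 220]]  # different layout entirely

database = [average_hash(known)]
print(matches_known_media(reupload, database))   # True: near-duplicate
print(matches_known_media(unrelated, database))  # False
```

The design point is that matching is fuzzy: exact file hashes break the moment someone recompresses an image, so these systems compare perceptual fingerprints within a tolerance instead.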
That’s a necessarily complicated problem
First, it requires Facebook to identify terrorists, and Facebook is not, I am reliably informed, the FBI. Facebook’s algorithms are their attempt, or the beginning of their attempt, at corporate accountability for the use of their platform by terrorist groups.
Corporate accountability does not, however, equal competence. The systematic use of social media, particularly Facebook, as a propaganda engine for Islamic State and other terrorist organizations is absolutely happening. But seriously engaging with 21st-century terrorism isn’t exactly Facebook’s core business: it would mean employing political scientists, law enforcement experts and tech geniuses, and all of that costs money. Where the line falls between genuine accountability and protecting profit margins is rather an open question.
Islamic State on Facebook
In many ways, however, the big question is also a really small one: algorithms? As the article linked above demonstrates, Islamic State in particular is very good at Facebook.
Thus far, Islamic State has been better at Facebook than Facebook and the governments of the civilized world, working together, have been at stopping Islamic State from using Facebook.
The AI solutions Facebook is deploying may solve that, or not.
If they don’t, that’s status quo. On to the next solution.
What if they do?
What if some brilliant programmers get the process on lock and AI becomes a working solution for keeping Islamic State and similar scum off your product?
That would be an extraordinarily easy fix, in many ways a Godsend.
It would also raise massive legal, social and moral questions, because in effect it would grant a nonhuman agency the authority to moderate human speech. Facebook is, scary but true, one of the most active, vital settings in the history of human communication. Leaving content moderation of that forum alone to our robot overlords is a concern.
But if AI counterterrorism works on Facebook, the FBI – and Interpol, and GCTF, and the rest of the alphabet soup with assault rifles who address terrorism on a day to day basis – can be confidently expected to follow.
That represents a new level of scrutiny of everyday communication, one that, on the most basic level, lacks not only third-party oversight but human oversight of any kind.
Facebook taking responsibility for what’s done on its platform is praiseworthy, but it’s also a fascinating, potentially frightening look at what may be our digital future.