As Facebook moves further toward Zuckerberg’s Metaverse, concerns remain about how effectively the company addresses hate speech, with employees recently estimating that Facebook’s AI screening tools remove only around 2% of offending material.
According to The Wall Street Journal, internal Facebook documents show an alarming inability of the company’s AI screening to detect hate speech, violent threats, depictions of graphic content, and other “sensitive” material. This directly contradicts the company’s past public predictions.
A “senior engineer” also admitted that, beyond removing only around 2% of inappropriate material, the odds of that figure ever reaching even a simple majority are extremely slim: “Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term.”
The reported efficacy of the AI in question would be laughable were the situation less dire. The internal documents cite failures ranging from AI confusing cockfights with car crashes to misidentifying a car-wash video as a first-person shooting, while far more sobering imagery (live-streamed shootings, viscerally graphic car wrecks, and open threats of violence against transgender children) went entirely unflagged.
Even the system built around the AI is a source of doubt for employees. “When Facebook’s algorithms aren’t certain enough that content violates the rules to delete it, the platform shows that material to users less often—but the accounts that posted the material go unpunished,” reports The Wall Street Journal.
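To make that dynamic concrete, here is a minimal, purely hypothetical sketch of what such a two-threshold policy looks like in practice. The function name, threshold values, and actions are assumptions for illustration, not Facebook’s actual system; the point is that every branch leaves the posting account untouched.

```python
# Illustrative sketch of the thresholded moderation policy the WSJ describes:
# high-confidence violations are removed, mid-confidence content is merely
# downranked, and low-confidence content is left alone. All names and
# threshold values below are hypothetical.

REMOVE_THRESHOLD = 0.95    # hypothetical confidence needed to auto-delete
DOWNRANK_THRESHOLD = 0.60  # hypothetical confidence needed to reduce reach

def moderate(post_id: str, violation_score: float) -> str:
    """Decide what happens to a post given the classifier's confidence
    (0.0 to 1.0) that it violates policy."""
    if violation_score >= REMOVE_THRESHOLD:
        return f"remove {post_id}"       # content deleted
    if violation_score >= DOWNRANK_THRESHOLD:
        return f"downrank {post_id}"     # shown to users less often
    return f"no action on {post_id}"     # below both thresholds

# Note: in every branch, the account that posted the content faces no
# penalty, which is the gap employees flagged.
print(moderate("post_123", 0.72))  # -> "downrank post_123"
```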
AI has repeatedly been shown to struggle with bias as well. Large Language Models (LLMs), the machine-learning systems that inform things like search engine results and predictive text, have defaulted to racist or xenophobic rhetoric when given prompts containing terms like “Muslim”, raising ethical concerns about whether these tools are actually capable of policing something like hate speech.
Taken as a whole, Facebook employees’ doubts about the actual usefulness of AI in removing inappropriate material (and in keeping underage users off the platform) paint a grim portrait of the future of social media, especially as the Metaverse marches steadily toward mainstream adoption.
Jack Lloyd has a BA in Creative Writing from Forest Grove's Pacific University; he spends his writing days using his degree to pursue semicolons, freelance writing and editing, Oxford commas, and enough coffee to kill a bear. His infatuation with rain is matched only by his dry sense of humor.
