Reddit often gets a bad rap in the media as a cesspool of offensive language and a breeding ground for extreme, harmful ideas. This is due in part to the company’s refusal to moderate or ban hate speech.
In fact, Reddit CEO Steve Huffman recently stated that it’s not possible for the company to moderate hate speech. Huffman noted that since hate speech can be “difficult to define,” enforcing a ban would be “a nearly impossible precedent to uphold.”
As lazy as that may sound, anyone who has operated massive online groups (as we do) knows this to be unfortunate but true.
Currently, Reddit policy prohibits “content that encourages, glorifies, incites, or calls for violence or physical harm against an individual or a group of people […or] that glorifies or encourages the abuse of animals.”
Just about anything else is fair game. Sure, subreddit forums have been shut down in the past, but typically as the result of public pressure. Back in 2015, several subreddits were removed, including ones focused on mocking overweight people, transgender folks, and people of color.
However, other equally offensive subreddits didn’t get the axe. Reddit’s logic was that it had received complaints that the now-retired subreddits were harassing people both on and off the site. Offensive posts are permitted; actual harassment is not.
Huffman previously stated, “On Reddit, the way in which we think about speech is to separate behavior from beliefs.” So posting something horribly racist won’t get flagged unless there’s evidence that users crossed the line from free speech to harassing behavior.
Drawing the line between harassment and controversial conversation is where things get tricky for moderators.
Other social media sites like Facebook, Instagram, and Twitter at least make an attempt, though. So what’s holding Reddit back?
Well, for one, moderating hate speech isn’t a clear-cut task.
Right now, AI can’t fully take the reins because to truly put a stop to hate speech, there must be an understanding of both language and intent.
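To see why intent matters, consider a toy keyword filter. This is a deliberately simplified sketch (the blocklist, function name, and example comments are all invented for illustration), not how Reddit, Facebook, or any real platform actually works:

```python
import re

# Toy sketch only: "slur1" is a placeholder token standing in for an
# actual slur; no platform's filter is this literal.
BLOCKLIST = {"slur1"}

def naive_filter(comment: str) -> bool:
    """Flag a comment if any blocklisted word appears in it."""
    tokens = set(re.findall(r"[a-z0-9]+", comment.lower()))
    return bool(tokens & BLOCKLIST)

print(naive_filter("you are a slur1"))                           # True: hateful, correctly flagged
print(naive_filter("it is never okay to call someone a slur1"))  # True: counter-speech, false positive
print(naive_filter("go back where you came from"))               # False: hateful, but no keyword, missed
```

The filter matches words, not meaning: it flags a user condemning a slur and misses hate expressed in everyday vocabulary. Closing both gaps is precisely the language-and-intent problem current AI hasn’t solved.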
Since current AI isn’t quite there yet, Facebook employs actual people for the daunting task. The company relies mostly on overseas contractors, which gets expensive (and those contractors can lack an understanding of local cultural context).
Users post millions of comments to Reddit per day, and paying real humans to sift through every potentially offensive or harassing post could break the bank.
Most agree that cost isn’t a valid excuse, though, so Facebook is looking into buying and developing natural language processing software as an alternative. For now, Reddit does not seem likely to follow in Facebook’s footsteps.
While Facebook sees itself as a place where users should feel safe and comfortable, Reddit’s stance is that all views are welcome, even potentially offensive and hateful ones.
This April, in an AMA (Ask Me Anything), a user straight up asked whether obvious racism and slurs are against Reddit’s rules.
Huffman responded in part, “the best defense against racism and other repugnant views both on Reddit and in the world, is instead of trying to control what people can and cannot say through rules, is to repudiate these views in a free conversation.”
So essentially, although racism is “not welcome,” it’s also not likely to be banned unless there is associated unacceptable behavior as well.
It’s worth noting that while Reddit as a whole does not remove most hate speech, each subreddit has its own set of rules, which may be stricter. The site essentially operates as an online democracy, with each subreddit “state” afforded the autonomy to enforce differing standards.
Enforcement comes down to moderators, and although some content is clearly hateful, other posts can fall into a grey area.
Researchers at Berkeley recently partnered with the Anti-Defamation League to create the Online Hate Index project, an AI program that identifies hate speech. While the program was surprisingly accurate at identifying hate speech, it had difficulty gauging the intensity of statements.
Plus, many of the same words are used in both hateful and non-hateful comments, so AI and human moderators alike struggle to define what crosses the line into hate speech. Not all harmful posts are immediately obvious, and when a forum receives a constant influx of submissions, the volume can overwhelm moderators.
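That overlap problem is easy to demonstrate. In this rough sketch (the comments are invented toy data, not the researchers’ actual corpus), most of the words in the hateful examples also appear in the innocuous ones:

```python
# Toy data invented for illustration; not the Online Hate Index corpus.
hateful = [
    "those people are animals and should leave",
    "we would be better off without those people",
]
innocuous = [
    "people should leave wild animals alone",
    "we would be better off with fewer cars",
]

def vocab(docs):
    """Collect the set of unique words across a list of comments."""
    return {word for doc in docs for word in doc.split()}

shared = vocab(hateful) & vocab(innocuous)
print(sorted(shared))
# ['animals', 'be', 'better', 'leave', 'off', 'people', 'should', 'we', 'would']
```

Nine of the thirteen distinct words in the hateful examples also show up in the harmless ones, which is why a model that only counts words has so little signal to work with.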
While it’s still worth making every effort to foster healthy online communities, until AI’s language processing abilities improve, complete hate speech moderation may not be possible for large online groups.
Lindsay is an editor for The American Genius with a Communication Studies degree and English minor from Southwestern University. Lindsay is interested in social interactions across and through various media, particularly television, and will gladly hyper-analyze cartoons and comics with anyone, cats included.
