Social Media

Reddit CEO says it’s impossible to police hate speech, and he’s 100% right

(SOCIAL MEDIA) Moderating speech online is a slippery slope, and Reddit’s CEO argues that it’s impossible. Here’s why censorship of hate speech is still so complicated.


Reddit often gets a bad rap in the media as a cesspool of offensive language and a breeding ground for extreme, harmful ideas. This is due in part to the company’s refusal to moderate or ban hate speech.

In fact, Reddit CEO Steve Huffman recently stated that it’s not possible for the company to moderate hate speech. Huffman noted that since hate speech can be “difficult to define,” enforcing a ban would be “a nearly impossible precedent to uphold.”

As lazy as that may sound, anyone who has operated massive online groups (as we do) knows this to be unfortunate but true.

Currently, Reddit policy prohibits “content that encourages, glorifies, incites, or calls for violence or physical harm against an individual or a group of people […or] that glorifies or encourages the abuse of animals.”

Just about anything else is fair game. Sure, subreddit forums have been shut down in the past, but typically as the result of public pressure. Back in 2015, several subreddits were removed, including ones focused on mocking overweight people, transgender folks, and people of color.

However, other equally offensive subreddits didn’t get the axe. Reddit’s logic was that the company had received complaints that the now-retired subreddits were harassing others on and offline. Offensive posts are permitted; actual harassment is not.

Huffman previously stated, “On Reddit, the way in which we think about speech is to separate behavior from beliefs.” So posting something horribly racist won’t get flagged unless there’s evidence that users crossed the line from free speech to harassing behavior.

Drawing the line between harassment and controversial conversation is where things get tricky for moderators.

Other social media sites like Facebook, Instagram, and Twitter at least make an attempt, though. So what’s holding Reddit back?

Well, for one, moderating hate speech isn’t a clear-cut task.

Right now, AI can’t fully take the reins because to truly put a stop to hate speech, there must be an understanding of both language and intent.

Since current AI isn’t quite there yet, Facebook currently employs actual people for the daunting task. The company mostly relies on overseas contractors, which can get pretty expensive (and can lack understanding of cultural contexts).

Users post millions of comments to Reddit per day, and paying real humans to sift through every potentially offensive or harassing post could break the bank.

Most agree that cost isn’t a relevant excuse, though, so Facebook is looking into buying and developing software specializing in natural language processing as an alternative solution. But right now, Reddit does not seem likely to follow in Facebook’s footsteps.

While Facebook sees itself as a place where users should feel safe and comfortable, Reddit’s stance is that all views are welcome, even potentially offensive and hateful ones.

This April, in an AMA (Ask Me Anything), a user asked outright whether obvious racism and slurs are against Reddit’s rules.

Huffman responded in part, “the best defense against racism and other repugnant views both on Reddit and in the world, is instead of trying to control what people can and cannot say through rules, is to repudiate these views in a free conversation.”

So essentially, although racism is “not welcome,” it’s also not likely to be banned unless there is associated unacceptable behavior as well.

It’s worth noting that while Reddit as a whole does not remove most hate speech, each subreddit has its own set of rules, which may be stricter than the sitewide policy. The site essentially operates as an online democracy, with each subreddit “state” afforded the autonomy to enforce differing standards.

Enforcement comes down to moderators, and although some content is clearly hateful, other posts fall into a grey area.

Researchers at Berkeley recently partnered with the Anti-Defamation League to create the Online Hate Index, an AI program that identifies hate speech. While the program was surprisingly accurate at identifying hate speech, determining the intensity of statements proved difficult.

Plus, many of the same words are used in hate and non-hate comments. AI and human moderators struggle with defining what crosses the line into hate speech. Not all harmful posts are immediately obvious, and when a forum receives a constant influx of submissions, the volume can be overwhelming for moderators.
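To illustrate why shared vocabulary trips up automated filters, here is a toy keyword-based filter in Python. This is a minimal sketch with hypothetical word lists and example comments, not any platform’s actual moderation logic:

```python
# A naive blocklist filter: flag any comment containing a listed term.
# The terms and examples below are hypothetical placeholders.
FLAGGED_TERMS = {"trash", "vermin"}

def naive_flag(comment: str) -> bool:
    """Return True if the comment contains any blocklisted word."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return not words.isdisjoint(FLAGGED_TERMS)

# A dehumanizing attack and an innocuous chore reminder share vocabulary,
# so a word-level filter cannot tell them apart without understanding intent.
attack = "Those people are vermin."
benign = "Reminder: take out the trash tonight!"

print(naive_flag(attack))  # True - correctly flagged
print(naive_flag(benign))  # True - a false positive
```

A system that flags both comments equally has no way to act on the hateful one without also punishing the harmless one, which is why understanding context and intent, not just words, is the hard part.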

While it’s still worth making any effort to foster healthy online communities, until we get a boost to AI’s language processing abilities, complete hate speech moderation may not be possible for large online groups.

Lindsay is an editor for The American Genius with a Communication Studies degree and English minor from Southwestern University. Lindsay is interested in social interactions across and through various media, particularly television, and will gladly hyper-analyze cartoons and comics with anyone, cats included.

Social Media

New Pinterest code of conduct pushes for mindful posting

(SOCIAL MEDIA) Social media sites have struggled with harmful content, but Pinterest is using their new code of conduct to encourage better, not just reprimands.


It appears that at least one social media site has made a decision on how to move forward with the basis of its platform. Pinterest has created a brand-new code of conduct for its users, giving them a set of rules to follow. To some that may feel a little restricting, but I’m not mad about it. In a public statement, they told the world their message:

“We’re on a journey to build a globally inclusive platform where Pinners around the world can discover ideas that feel personalized, relevant, and reflective of who they are.”

The revamp of their system includes three separate changes to the platform’s rules, each complete with examples and full guidelines. The list is summed up as:

  • Pinterest Creator Code
  • Pinterest Comment Moderation Tools
  • Pinterest Creator Fund

For the Creator Code, Pinterest had this to say: “The Creator Code is a mandatory set of guidelines that lives within our product intended to educate and build community around making inclusive and compassionate content”. The rules are as follows:

  • Be Kind
  • Check my Facts
  • Be aware of triggers
  • Practice Inclusion
  • Do no harm

The list of rules provides some details on the pop-up as well, with notes like “make sure content doesn’t insult,” “make sure information is accurate,” etc. The main goal of this ‘agreement’, according to Pinterest, is not to reprimand offenders but to foster a proactive and empowering social environment. Other social websites have been shoehorned into reprimanding instead of being proactive against abuse, with mixed results. Facebook itself is getting a great deal of flak over its new algorithm, which picks out individual words and bans people for progressively longer periods without any form of context.

Comment Moderation is a new set of tools that Pinterest hopes will encourage a more positive experience between users and content creators. It’s like dangling a carrot in front of the donkey to get him to move the cart.

  • Positivity Reminders
  • Moderation Tools
  • Featured Comments
  • New Spam Prevention Signals

Sticking to the positivity considerations here seems to be the goal. They seem to be focusing on reminding people to be good and encouraging them to stay that way. Again, proactive, not reactive.

The social platform’s last change is the creation of a Pinterest Creator Fund, which aims to provide training, strategy consulting, and financial support. Pinterest has also stated that it will aim these funds specifically at underrepresented communities, even committing to a quota of 50% of its Creators. While I find this commendable, it also comes off a little heavy-handed. I would personally wait to see how they go about it. If they ignore good and decent Creators purely because those Creators belong to a represented group, I would call it a bad use of their time. However, if they actively seek out underrepresented Creators while still bringing in good Creators from represented groups, then I’m all for it.

Being the change you want to see in the world is something I personally feel we should all strive towards; whether you produce positive change depends, of course, on your own goals. In my opinion, Pinterest’s new code of conduct is creating a more positive experience and striving to remind people to be better than they were with each post. It’s a bold move that could ultimately have a spectacular outcome. Only time will tell how creators and users respond. Best of luck to them.


Social Media

Facebook releases Hotline as yet another Clubhouse competitor

(SOCIAL MEDIA) As yet another app emerges to try and take some of Clubhouse’s success, Facebook Hotline adds a slightly more formal video chat component to the game.


Facebook is at it again and launching its own version of another app. This time, the company has launched Hotline, which looks like a cross between Instagram Live and Clubhouse.

Facebook’s Hotline is the company’s attempt at competing with Clubhouse, the audio-based social media app, which was released on iOS in March 2020. Earlier this year, The New York Times reported Facebook had already begun working on building its own version of the app. Erik Hazzard, who joined Facebook in 2017 after the company acquired his tbh app, is leading the project.

The app was created by the New Product Experimentation (NPE) Team, Facebook’s experimental development division, and it’s already in beta testing online. To access it, you can use the web-based application through the platform’s website to join the waitlist and “Host a Show”. However, you will need to sign in using your Twitter account to do so.

Unlike Clubhouse, Hotline lets users chat through video, not just audio. The product is more of a formal Q&A and recording platform. Its features allow people to live stream and hold Q&A sessions with their audiences, similar to Instagram Live, and audience members can ask questions using text or audio.

Also, what makes Hotline a little more formal than Clubhouse is that it automatically records conversations. According to TechCrunch, hosts receive both a video and an audio recording of the event. With a guaranteed recording feature, the Q&A sessions stray from the casual vibe of Clubhouse.

The first person to host a Q&A live stream on Hotline is real-estate investor Nick Huber, who is the type of “expert” Facebook is hoping to attract to its platform.

“With Hotline, we’re hoping to understand how interactive, live multimedia Q&As can help people learn from experts in areas like professional skills, just as it helps those experts build their businesses,” a Facebook spokesperson told TechCrunch. “New Product Experimentation has been testing multimedia products like CatchUp, Venue, Collab, and BARS, and we’re encouraged to see the formats continue to help people connect and build community,” the spokesperson added.

According to a Reuters article, the app doesn’t have any audience size limits, hosts can remove questions they don’t want to answer, and Facebook is moderating inappropriate content during its early days.

An app for mobile devices isn’t available yet, but if you want to check it out, you can visit Hotline’s website.


Social Media

Brace yourselves: Facebook has re-opened political advertising space

(SOCIAL MEDIA) After a break due to misinformation in the past election, Facebook is once again allowing political advertising slots on their platform – with some caveats.


After a months-long ban on political ads due to misinformation and other inappropriate behavior following the election in November, Facebook is planning to resume providing space for political advertising.

Starting on Thursday, March 4th, advertisers were able to buy spots for ads covering politics, what Facebook categorizes as “social issues”, and other potentially charged topics previously prohibited by the social media platform.

The history of the ban is complicated, and its existence was predicated on a profound distrust between political parties and mainstream news. In the wake of the 2016 election and illicit advertising activity that muddied the proverbial waters, Facebook had what some would view as a clear moral obligation to prevent similar sediment from clouding future elections.

Facebook delivered on that obligation by removing political advertising from their platform prior to Election Day, a decision that would stand fast in the tumultuous months to follow. And, while Facebook did temporarily suspend the ban in Georgia during the Senate runoff elections, political advertisements nevertheless remained largely absent from the platform until last week.

The removal of the ban does have some accompanying caveats—namely the identification process. Unlike before, advertisers will have to go to great lengths to confirm their identities prior to launching ads. Those ads will most likely also need to come from domestic agencies given Facebook’s diligent removal of foreign and malicious campaigns in the prior years.

The moral debate regarding social media advertising—particularly on Facebook—is a deeply nuanced and divided one. Some argue that, by removing political advertising across the board, Facebook has simply limited access for “good actors” and cleared the way for illegitimate claims.

Facebook’s response to this is simply that it didn’t fully understand the role ads would play in the electoral process, and that allowing those ads back will help it learn more going forward.

Either way, political advertising spots are now open on Facebook, and the overall public perception seems controversial enough to warrant keeping an eye on the progression of this decision. It wouldn’t be entirely unexpected for Facebook to revoke access to these advertisements again—or limit further their range and scope—in the coming months and years.

