Twitter is testing a new warning system for potentially offensive tweets. If a tweet contains language Twitter deems “harmful,” the app will display a warning and offer a chance to revise the tweet before posting. The warning notes that language in the tweet is similar to that in previously reported tweets.
If internal alarms are going off in your head, congratulations, you are wary of censorship! However, if you read a tweet spewing bile, racism, or threats of violence against a person or institution, do you report it? Do you want Twitter to take it down? If you said yes, then congratulations, you want to protect the vulnerable and fight hatred.
If you are wary of censorship, yet want to fight hatred and protect the vulnerable, welcome to the interwebs! It’s a crazy and precarious place where almost anything can happen. Despite decades of use, we’re still navigating our way through the gauntlet of tough decisions the proliferation of platforms and ease of use have given us.
First, how does Twitter gauge a potentially harmful tweet? According to Twitter, the app responds to language similar to prior tweets that people have reported. Twitter, like Facebook, Instagram, and other social platforms, already has hateful conduct rules in place. In fact, Twitter has a host of rules and policies intended to protect users from fraud, graphic violence, or explicitly sexual images.
Their rationale is detailed, but explains, “Our role is to serve the public conversation, which requires representation of a diverse range of perspectives.” However, they “recognise that if people experience abuse on Twitter, it can jeopardize their ability to express themselves.”
We’ve heard stories of teenagers, or even younger children, killing themselves after relentless bullying online. The feeling of anonymity that comes with insulting a living, breathing human being from behind a computer screen often causes a nasty pile-on effect. We’ve seen people use social media to bully, sexually harass, and threaten others.
Twitter cites research showing that women, people of color, LGBTQIA+ individuals, and other vulnerable populations are more likely to stop expressing themselves freely after experiencing abuse on social media. Even Kelly Marie Tran, who played Resistance fighter Rose Tico in Star Wars, took down her Instagram photos before taking a stand against haters. And she had Jedis in her corner. Imagine the average person’s response to such cruel tactics.
We’ve seen hate groups and terrorist organizations use social media to recruit supporters and plan evil acts. We see false information springing up like weeds. Sometimes this information can be dangerous, especially when Joe Blow is out there sharing unresearched and inaccurate medical advice. Go to sleep, Blow, you’re drunk.
As an English major, and an open-minded person, I have a problem with censorship. Banned books are some of my favorites of all time. However, Twitter is a privately owned platform. Twitter has no obligation to amplify messages of hate. They feel, and I personally agree, that they have some responsibility to keep hateful words inciting violence off their platform. This is a warning, not a ban, and one they’re only rolling out to iOS users for now.
I mean, in the history of angry rants, when has a “Hey, calm down, you shouldn’t say that” ever made the person less angry or less ranty? Almost never. In that case, the person will make the post anyway, leaving it up to the masses to report it. At that point, Twitter can decide to suspend the account and tell the user to delete the tweet, add a warning, or otherwise take action.
Every once in a while, though, someone may appreciate the note. If you’ve ever had a colleague read an email for “tone” in a thorny work situation, you know heeding a yellow flag is often the wisest decision. This warning notice gives users a chance to edit themselves. As a writer, I always appreciate a chance to edit myself. If they flag every damn curse word, though, that will get real annoying real fast. You’re not my mom, Twitter. You’re not the boss of me.
This isn’t your great-granddaddy’s book burning. This is 2020. The internet giveth; the internet taketh away. It’s a crying shame that evil creeps in when we’re not looking. Speech has consequences. Users can’t edit tweets, so once it’s out there, it’s out there. Even if they delete a tweet within moments of posting, anyone can screenshot that baby and share it with the world. Part of me says, “Good, let the haters out themselves.”
Twitter has shown itself to be open to differences of opinion, has encouraged freedom of expression, and has opened up a whole new line of communication for traditionally underrepresented populations. They are a private company, and their rules and policies are posted. What, you didn’t read the terms of use? Gasp!
It’s Twitter’s rodeo, after all. This warning gives users a quick heads-up before they post something that will likely be reported or removed anyway. For better or worse, Twitter is still leaving it up to users to post what they want and deal with the potential fallout. Hey, I have a great idea! How about we all be respectful of each other on the internet, and Twitter won’t have to come up with this kind of thing.
Joleen Jernigan is an ever-curious writer, grammar nerd, and social media strategist with a background in training, education, and educational publishing. A native Texan, Joleen has traveled extensively, worked in six countries, and holds an MA in Teaching English as a Second Language. She lives in Austin and constantly seeks out the best the city has to offer.