

Twitter will give users a warning before a harmful tweet is sent

(SOCIAL MEDIA) Twitter is rolling out a new warning giving users a chance to edit their tweet before they post “harmful” language, and we aren’t sure how to feel about it.


Twitter is testing a new warning system for potentially offensive tweets. If a draft contains language Twitter deems “harmful,” the app pops up a warning and an opportunity to revise the tweet before posting. The warning notes that the language in the tweet is similar to that of previously reported tweets.

If internal alarms are going off in your head, congratulations, you are wary of censorship! However, if you read a tweet spewing bile, racism, or threats of violence against a person or institution, do you report it? Do you want Twitter to take it down? If you said yes, then congratulations, you want to protect the vulnerable and fight hatred.

If you are wary of censorship, yet want to fight hatred and protect the vulnerable, welcome to the interwebs! It’s a crazy and precarious place where almost anything can happen. Despite decades of use, we’re still navigating our way through the gauntlet of tough decisions the proliferation of platforms and ease of use have given us.

First, how does Twitter gauge a potentially harmful tweet? According to Twitter, the warning is triggered by language similar to that of tweets people have previously reported. Twitter, like Facebook, Instagram, and other social platforms, already has hateful conduct rules in place. In fact, Twitter has a host of rules and policies intended to protect users from fraud, graphic violence, and explicit sexual images.
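To picture the general idea, here is a toy sketch of a similarity-based check. This is purely illustrative, not Twitter's actual system; the reported phrases, the threshold, and the use of simple string similarity are all invented assumptions for the example.

```python
# Toy illustration only -- NOT Twitter's actual system.
# Flags a draft tweet when its wording closely resembles tweets that were
# previously reported (the phrases and threshold here are made up).
from difflib import SequenceMatcher

REPORTED_PHRASES = [            # hypothetical previously reported wording
    "you are worthless and everyone hates you",
    "people like you should be silenced for good",
]
SIMILARITY_THRESHOLD = 0.6      # assumed cutoff for this example


def looks_harmful(draft: str) -> bool:
    """Return True if the draft is textually similar to a reported phrase."""
    draft = draft.lower()
    return any(
        SequenceMatcher(None, draft, phrase).ratio() >= SIMILARITY_THRESHOLD
        for phrase in REPORTED_PHRASES
    )


if __name__ == "__main__":
    if looks_harmful("You are worthless and everyone hates you!"):
        print("Want to review this before tweeting?")  # the nudge, not a ban
```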

Twitter’s rationale is detailed, but it explains, “Our role is to serve the public conversation, which requires representation of a diverse range of perspectives.” However, they “recognise that if people experience abuse on Twitter, it can jeopardize their ability to express themselves.”

We’ve heard stories of teenagers, or even younger children, killing themselves after relentless bullying online. The sense of anonymity that comes with insulting a living, breathing person from behind a computer screen often fuels a nasty pile-on effect. We’ve seen people use social media to bully, sexually harass, and threaten others.

Twitter cites research showing that women, people of color, LGBTQIA+ individuals, and other vulnerable populations are more likely to stop expressing themselves freely when they are abused on social media. Even Kelly Marie Tran, who played Resistance fighter Rose Tico in Star Wars, took down her Instagram photos before taking a stand against haters. And she had Jedis in her corner. Imagine the average person’s response to such cruel tactics.

We’ve seen hate groups and terrorist organizations use social media to recruit supporters and plan evil acts. We see false information springing up like weeds. Sometimes this information can be dangerous, especially when Joe Blow is out there sharing unresearched and inaccurate medical advice. Go to sleep, Blow, you’re drunk.

As an English major and an open-minded person, I have a problem with censorship. Banned books are some of my favorites of all time. However, Twitter is a privately owned platform, and it has no obligation to amplify messages of hate. The company feels, and I personally agree, that it has some responsibility to keep hateful words inciting violence off its platform. This is a warning, not a ban, and one Twitter is only rolling out to iOS users for now.

I mean, in the history of angry rants, when has a “Hey, calm down, you shouldn’t say that” ever made the person less angry or less ranty? Almost never. More likely, the person will post anyway, leaving it up to the masses to report it. At that point, Twitter can decide whether to suspend the account, tell the user to delete the tweet, add a warning, or otherwise take action.

Every once in a while, though, someone may appreciate the note. If you’ve ever had a colleague read an email for “tone” in a thorny work situation, you know heeding a yellow flag is often the wisest decision. This warning notice gives users a chance to edit themselves. As a writer, I always appreciate a chance to edit myself. If they flag every damn curse word, though, that will get real annoying real fast. You’re not my mom, Twitter. You’re not the boss of me.

This isn’t your great-granddaddy’s book burning. This is 2020. The internet giveth; the internet taketh away. It’s a crying shame that evil creeps in when we’re not looking. Speech has consequences. Users can’t edit tweets, so once it’s out there, it’s out there. Even if they delete a tweet within moments of posting, anyone can screenshot that baby and share it with the world. Part of me says, “Good, let the haters out themselves.”

Twitter has shown itself to be open to differences of opinion, has encouraged freedom of expression, and has opened up a whole new line of communication for traditionally underrepresented populations. It is a private company, and its rules and policies are posted. What, you didn’t read the terms of use? Gasp!

It’s Twitter’s rodeo, after all. This warning gives users a quick heads-up before posting something that will likely be reported or removed anyway. For better or worse, Twitter is still leaving it up to users to post what they want and deal with the potential fallout. Hey, I have a great idea! How about we all be respectful of each other on the internet, so Twitter won’t have to come up with this kind of thing?


Instagram announces 3 home feed options, including chronological order

(SOCIAL MEDIA) Instagram is allowing users to choose how their home feed appears so they can tailor their own experience… and chronological is back!


Break out the bottle of champagne, because chronological order is coming back to Instagram!

About time, right? Well, that’s not all. Per Protocol, Instagram has announced that they are rolling out three feed options in the first half of 2022. What?! Yes, you read that right.

3 New Feed View Options

  1. Home: This feed view should feel familiar because it’s the algorithmic feed you already use. No changes to this view.
  2. Favorites: This feed view offers a nice and tidy way to see only the creators, friends, and family you choose.
  3. Following: Last, but not least, is my favorite reboot: a chronological view of every account you follow.

Per Protocol, recent legal allegations claim that Facebook and Instagram have been prioritizing content viewed as harmful in their algorithms, particularly on Instagram. Instagram is widely believed to be harmful to teens. Per the American Psychological Association, “Studies have linked Instagram to depression, body image concerns, self-esteem issues, social anxiety, and other problems.” The company has been under scrutiny from lawmakers and, in response, is positioning the chronological feed as a solution.

However, this won’t fix everything. Even if the algorithm isn’t prioritizing harmful posts, those posts will still exist, and anyone who follows the accounts behind them will still see them. The other issue with this solution is that unless Instagram lets you choose your default feed view, the algorithmic view could still be the one that loads automatically. Facebook doesn’t allow you to make the chronological feed your default view, which means you have to choose that view every time. That bit of friction means there will be times it is overlooked, and some people may not even know the functionality exists. Knowing how Facebook handles this prepares us for what’s likely to come with Instagram. After all, Facebook, or Meta, owns both.

As an entrepreneur, the chronological view excites me, but I know that how widely it will actually be used is questionable. I would love to know that others can see the products and services I offer instead of hoping Instagram’s algorithm finds my content worthy of sharing.

As a human being with a moral conscience, I have to scream, “C’mon Instagram, you CAN do better!” We all deserve better than having a computer pick what’s shown to us. Hopefully, lawmakers will recognize this band-aid quick fix for what it truly is and continue making real changes to benefit us all.



Facebook’s targeting options for advertising are changing this month

(SOCIAL MEDIA) Do you market your business on Facebook? You need to know that their targeting options for ads are changing and what to do about it.


Meta is transforming Facebook’s ad campaigns beginning January 19th. Facebook, which has infamously battled criticism over election ads on its platform, is limiting its ad-targeting options. Per this Facebook blog post, the changes eliminate the ability to target users based on interactions with content related to health (e.g., “Lung cancer awareness”, “World Diabetes Day”), race and ethnicity, political affiliation, religious practices (e.g., “Catholic Church” and “Jewish holidays”), and sexual orientation (e.g., “same-sex marriage” and “LGBT culture”).

These changes go into effect on January 19, 2022. Facebook will no longer allow new ads to use these targeting tools after that date. By March 17, 2022, any existing ads using those targeting tools will no longer be allowed.

The VP of Ads and Business Product Marketing at Facebook, Graham Mudd, expressed the belief that personalized ad experiences are the best, but followed up by stating:

“[W]e want to better match people’s evolving expectations of how advertisers may reach them on our platform and address feedback from civil rights experts, policymakers, and other stakeholders on the importance of preventing advertisers from abusing the targeting options we make available.”

To help soften the blow, Facebook is offering tips and examples to help small businesses, non-profits, and advocacy groups continue to reach their audiences in ways that go beyond broad targeting by gender and age.

These tips include using other targeting types, such as Engagement Custom Audiences, Lookalike Audiences, Website Custom Audiences, Location Targeting, and Custom Audiences built from customer lists.

Here’s the lowdown on how it will happen.

Per Search Engine Journal, changes can be made to budget amounts or campaign names without impacting targeting until March 17th. However, editing anything at the ad set level will trigger changes at the audience level.

If you need to keep a particular ad for reuse, it may be best to edit its detailed targeting settings before March 17th to ensure you can still make changes to it in the future.

I believe it was Heraclitus who declared that change is the only constant. Knowing this, we can expect other social platforms to follow suit and adjust their targeting in the future as well.



Hate speech seemingly spewing on your Facebook? You’re not wrong

(SOCIAL MEDIA) Facebook (now Meta) employees estimate its AI tools only clean up 3%-5% of hate speech on the platform. Surprise, Surprise *eye roll*


As Facebook moves further toward Zuckerberg’s Metaverse, concerns remain about how effectively the company addresses hate speech, with employees recently estimating that only around 2% of offending material is removed by Facebook’s AI screening tools.

According to the Wall Street Journal, internal documents from Facebook show an alarming inability to detect hate speech, violent threats, depictions of graphic content, and other “sensitive” issues via its AI screening. This directly contradicts predictions the company has made in the past.

A “senior engineer” also admitted that, in addition to removing only around 2% of inappropriate material, the AI is unlikely to improve much: “Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term.”

The reported efficacy of the AI in question would be laughable were the situation less dire. The internal documents reference reports ranging from the AI confusing cockfights with car crashes to misidentifying a car wash video as a first-person shooting, while far more sobering imagery, including live-streamed shootings, viscerally graphic car wrecks, and open threats of violence against transgender children, went entirely unflagged.

Even the system in which the AI works is a source of doubt for employees. “When Facebook’s algorithms aren’t certain enough that content violates the rules to delete it, the platform shows that material to users less often—but the accounts that posted the material go unpunished,” reports the Wall Street Journal.

AI has repeatedly been shown to struggle with bias as well. Large language models (LLMs), the machine-learning systems that inform things like search engine results and predictive text, have defaulted to racist or xenophobic rhetoric when prompted with terms like “Muslim”, raising ethical concerns about whether these tools are actually capable of addressing things like hate speech.

As a whole, Facebook employees’ doubts about the actual usefulness of AI in removing inappropriate material (and keeping underage users off of the platform) paint a grim portrait of the future of social media, especially as the Metaverse marches steadily forward in mainstream consumption.
