Wikipedia founder Jimmy Wales announced the launch of WT:Social last week, a social network sprung from the WikiTribune project. In addition to creating the global encyclopedia that your high school teacher won’t let you cite as a source, Wales is also behind the Wikimedia Foundation and the Jimmy Wales Foundation for Freedom of Expression.
WikiTribune is a volunteer-driven platform focused on delivering “neutral, factual, high-quality news.” (There’s a lot that could be said about the ethics and logistics of trying to “fix” news by paying reporters even less/nothing, but that’s another article.)
Springing a social network out of a news site means that WT:Social’s focus is largely going to be on fixing what’s wrong with Facebook’s news. Facebook has drawn criticism over the last few years for its news policies.
Among other things, despite theoretically banning white nationalist content, Facebook’s list of “trusted” news sources includes Breitbart, a site whose founder has called it a platform for the alt-right. (The alt-right is, among other things, a self-avowed white nationalist movement.) Zuckerberg has also (as we’ve pointed out) claimed that politicians have the right to lie in advertisements. Refusing to hold advertisers to any standard of truth is deeply concerning, to say the least.
So WT:Social is out to improve the way people consume and share news. But is that enough to make it succeed as a social network? After all, people looking for Facebook or Twitter alternatives aren’t just looking for news. They’re looking for a less toxic platform.
Facebook and Twitter have both received criticism for how they handle user experience and advertisements alike. Both have problems with extremist movements bubbling up, and both have struggled with public perception in the wake of persistent allegations that their moderation systems are under-resourced and tend to side with abusive users over the marginalized people those abusers target. For its part, Twitter has overtly stated that people who violate its terms of service regarding harassment or threats will not be banned for it, so long as they are sufficiently newsworthy.
This might have something to do with the fact that they see some of those same TOS violators as enough of a draw to their platform to feature them in advertisements. And of course, Twitter kept conspiracy theorist Alex Jones and his InfoWars media company on the platform despite rules violations until he confronted CEO Jack Dorsey in person.
One theoretical point in WT:Social’s favor is that it plans to be donation-supported rather than ad-supported. That’s fantastic from an end-user standpoint, but it raises the question of whether enough users will buy in to keep the service afloat. And that’s not the only potential stumbling block in WT:Social’s path.
As yet WT:Social hasn’t really stated a particular interest in competing with Facebook and Twitter on the social aspects of social media, and so far, that lack of interest comes through on the site. This writer signed up for the social network (looking, as ever, for a Facebook alternative) and was greeted by a number of baffling things.
First, my attempts to log in were greeted by a notification that I was “number 65538 on the waiting list,” and that I could send invitations to get earlier access to the site and the ability to make posts.
Then, I made posts.
But now I can’t find them?
Beyond that, I’m not sure what the waiting list is actually for. On top of the mysterious queue, there’s a place where I can subscribe! But once again, I don’t quite know what I would be subscribing to, and $12.99/month is a lot to ask for a service that’s completely undefined. I suppose that I could track down other sources to explain this to me, but if the user experience is so confounding from the outset that I need to learn about it secondhand, do I really want to pursue the site further?
A friend and I, both eager for a Facebook alternative, started writing on each other’s walls to test the service out. But in lieu of any kind of notification system, we found ourselves writing on each other’s WT:Social profiles, and then returning to Facebook to let the other person know that we had done so.
It’s not an auspicious beginning.
But at the same time, something needs to happen. With Facebook’s reputation for promulgating fake news, Twitter’s notoriety for abuse, Reddit’s haze of toxicity, and content hubs like YouTube and Tumblr cracking down on adult content (and seemingly defining the existence of LGBT people as inherently “adult”), people are looking for some kind of life raft. The person who creates a robust social network that commits to rooting out toxicity could have quite the business opportunity on their hands.
Facebook pays $52M to content mods with PTSD, proving major flaw in their business
(SOCIAL MEDIA) Facebook will pay out up to $52 million to former content moderators suffering from PTSD to settle a 2018 class action lawsuit.
Facebook’s traumatized former content moderators are finally receiving their settlement for the psychological damage caused by having to view extremely disturbing content to keep it off of Facebook.
The settlement is costing the company $52 million, distributed as a one-time payment of $1,000 to each of the 10,000+ content moderators in four states. If any of these workers seek psychological help and are diagnosed with conditions related to their jobs, Facebook also has to pay for that medical treatment. Facebook will additionally pay up to $50,000 per moderator in damages (on a case-by-case basis).
Going forward, Facebook will also make one-on-one mental health counseling available to content moderators, will attempt to create a screening process to gauge future candidates’ emotional resiliency, and will give moderators the ability to stop seeing specific types of reported content.
According to NPR, Steve Williams, a lawyer for the content moderators, said, “We are so pleased that Facebook worked with us to create an unprecedented program to help people performing work that was unimaginable even a few years ago. The harm that can be suffered from this work is real and severe.”
Honestly, this job is not for the faint of heart, to say the least. Like the hard-working (but hardly unflappable) police officers on Law & Order: SVU, these moderators see the worst of humanity, and it takes a toll on the psyche. Facebook’s content moderators are only human, after all. These workers moderated every conceivable–and inconceivable–type of disturbing content posted to the 2-billion-users-strong social media platform for a living. Some for $28,800 a year.
I wouldn’t last five minutes in this role. It is painful to even read about what these content moderators witnessed for eight hours a day, five days a week. And while Facebook refuses to admit any wrongdoing as part of the agreement, come on, man. Graphic and disturbing content that upset someone enough to report it to Facebook is what these people viewed all day, every day. It sounds almost like a blueprint for creating trauma.
This settlement surely sets the precedent for more class action lawsuits to come from traumatized content moderators on other social media platforms. The settlement also shows this business model for what it is: flawed. This isn’t sustainable. It’s disgusting to think there are people out there posting heinous acts, and I am grateful the platform removes them.
However, they have to come up with a better way. Facebook employs thousands upon thousands of people who are brilliant at computer technology. Twitter, YouTube, and similar platforms do, too. They need to come up with a better plan going forward, instead of traumatizing these unfortunate souls. I don’t know what that will look like. But with Facebook’s sky-high piles of money and access to so many brilliant minds, they can figure it out. Something’s got to give. Please figure it out.
Twitter will give users a warning before a harmful tweet is sent
(SOCIAL MEDIA) Twitter is rolling out a new warning giving users a chance to edit their tweet before they post “harmful” language, and we aren’t sure how to feel about it.
Twitter is testing out a new warning system for potentially offensive tweets. If a tweet contains language Twitter deems “harmful,” Twitter will pop up a warning and an opportunity to revise the tweet before posting. The warning mentions that language in the tweet is similar to that of previously reported tweets.
If internal alarms are going off in your head, congratulations, you are wary of any censorship! However, if you read a tweet spewing bile, racism, or threats of violence against a person or institution, do you report it? Do you want Twitter to take it down? If you said yes, then congratulations, you want to protect the vulnerable and fight hatred.
If you are wary of censorship, yet want to fight hatred and protect the vulnerable, welcome to the interwebs! It’s a crazy and precarious place where almost anything can happen. Despite decades of use, we’re still navigating our way through the gauntlet of tough decisions the proliferation of platforms and ease of use have given us.
First, how does Twitter gauge a potentially harmful tweet? According to Twitter, the app responds to language similar to prior tweets that people have reported. Twitter, like Facebook, Instagram, and other social platforms, already has hateful conduct rules in place. In fact, Twitter has a host of rules and policies intended to protect users from fraud, graphic violence, or explicitly sexual images.
Their rationale is detailed, but explains, “Our role is to serve the public conversation, which requires representation of a diverse range of perspectives.” However, they “recognise that if people experience abuse on Twitter, it can jeopardize their ability to express themselves.”
We’ve heard stories of teenagers–or even younger children–killing themselves after relentless bullying online. The feeling of anonymity when insulting a living, breathing being from behind a computer screen often causes a nasty pile-on effect. We’ve seen people use social media to bully, sexually harass, and threaten others.
Twitter cites research showing women, people of color, LGBTQIA+ individuals, and other vulnerable populations are more likely to stop expressing themselves freely after being abused on social media. Even Kelly Marie Tran, who played Resistance fighter Rose Tico in Star Wars, took down her Instagram photos before taking a stand against haters. And she had Jedis in her corner. Imagine your average person’s response to such cruel tactics.
We’ve seen hate groups and terrorist organizations use social media to recruit supporters and plan evil acts. We see false information springing up like weeds. Sometimes this information can be dangerous, especially when Joe Blow is out there sharing unresearched and inaccurate medical advice. Go to sleep, Blow, you’re drunk.
As an English major, and an open-minded person, I have a problem with censorship. Banned books are some of my favorites of all time. However, Twitter is a privately owned platform. Twitter has no obligation to amplify messages of hate. They feel, and I personally agree, that they have some responsibility to keep hateful words inciting violence off of their platform. This is a warning, not a ban, and one they’re only rolling out to iOS users for now.
I mean, in the history of angry rants, when has a “Hey, calm down, you shouldn’t say that” ever made the person less angry or less ranty? Almost never. In which case, the person will make their post anyway, leaving it up to the masses to report it. At that point, Twitter can decide to suspend the account and tell the user to delete the tweet, add a warning, or otherwise take action.
Every once in a while, though, someone may appreciate the note. If you’ve ever had a colleague read an email for “tone” in a thorny work situation, you know heeding a yellow flag is often the wisest decision. This warning notice gives users a chance to edit themselves. As a writer, I always appreciate a chance to edit myself. If they flag every damn curse word, though, that will get real annoying real fast. You’re not my mom, Twitter. You’re not the boss of me.
This isn’t your great-granddaddy’s book burning. This is 2020. The internet giveth; the internet taketh away. It’s a crying shame that evil creeps in when we’re not looking. Speech has consequences. Users can’t edit tweets, so once it’s out there, it’s out there. Even if they delete a tweet within moments of posting, anyone can screenshot that baby and share it with the world. Part of me says, “Good, let the haters out themselves.”
It’s Twitter’s rodeo, after all. This warning gives users a quick heads-up before they post something that will likely be reported or removed anyway. For better or worse, Twitter’s still leaving it up to users to post what they want and deal with the potential fallout. Hey, I have a great idea! How about we all be respectful of each other on the internet, and Twitter won’t have to come up with this kind of thing?
Yelp adds virtual services classification to help during COVID
(SOCIAL MEDIA) Yelp constantly adds new classifications to help you find a business that meets your needs; now, because of COVID, it has added virtual services.
Yelp is making efforts to accommodate businesses whose operations are adapting in response to the coronavirus pandemic. Several new features will help businesses display updated services.
The company has added an information category titled virtual service offerings. Businesses can display service options such as classes, virtual consultations, performances, and tours. Yelpers can search for businesses based upon those offerings.
Yelp has already noticed trends where users are incorporating virtual services into their business profiles. In a report by TechCrunch, Yelp’s head of consumer product Akhil Kuduvalli said, “With these new product updates, businesses of all types that are adapting and changing the way they operate will be able to better connect with their customers and potentially find new ones.”
Virtual services in categories like fitness, gyms, home services, real estate, and health are already increasing in popularity. Yelp intends to showcase businesses that are providing those services by creating new Collections.
“Once business owners update their virtual service offerings on their Yelp for Business profiles, we will surface those updates to consumers through new call-to-action buttons, by updating the home screen and search results with links to groups of businesses offering these new virtual services, as well as surfacing them in other formats like Collections,” said Kuduvalli.
Also in the works is a curbside pickup category for restaurants. Additionally, Yelp introduced a free customized banner for businesses to post updates on their profiles. About 224,000 businesses have used the banner so far.
Yelp hasn’t stopped there. It’s made its Connect feature (which allows businesses to share important updates with all Yelpers on their profile and their email subscribers) free to eligible local businesses as part of Yelp’s commitment to waive $25 million in fees to support businesses in need during the COVID-19 crisis.
During COVID-19, businesses and consumers need all the help they can get, and thankfully, Yelp is there to help.