The year is 2020, and a new generation has grown up with the internet. For the most part, this has created immense potential for progress. People who once would have had to travel miles on foot or by car to learn something can now find it at their fingertips, and they can speak with someone on the other side of the globe in real time. The possibilities for human existence seem limitless, but where brightness exists, we must also deal with the darkness.
Social media, originally a means of reaching across distance to stay in touch, has become the yellow journalism of this era. From 45's inflammatory tweets to the everyday machinations of children exploring the world online, the internet carries just about everything. But as with most shortcuts, when we get one, we tend to abandon everything we've learned up to that point. Communication skills and the niceties of human interaction are stripped bare when people get into heated conversations online. This has played out in its entirety on Reddit over the last few months.
There was a situation that went from a flashy headline to shaking the entire platform. To make sure everyone's on the same page, let me break down Reddit's personnel structure. The paid staff at Reddit are the Admins. They have one job: making sure the sitewide rules are followed. These rules are short, simple, and mostly apply to the content that is posted, with a few additional topics. The real heavy lifters are the Moderators (Mods); what they do will become important to the story in a moment. The last level is the Users: the everyday people who post whatever they want (within the rules) and comment on the subreddits they are involved with.
So, as we look at the construction of Reddit, it's just a website platform. Users can create boards, called subreddits, which are their own little worlds to control. When you create a subreddit, using any available name prefixed by r/, you instantly become its lead Mod and can set whatever rules you want. If you want a feed that only hosts intense technical discussions on the physical mechanics of "My Little Pony," you do you, boo! If you can only handle memes of kittens in your online existence, then prepare for cuteness.
Now, those rules I mentioned don't stop people from posting, but they give you the power to remove things from your subreddit if you choose. To help you, because some subreddits reach tens of millions of people daily, you can add as many Moderators as you want. There have even emerged what can best be described as professional moderators: individuals working on so many different subreddits that their experience is seen as invaluable to anyone wanting to keep a good handle on their little section of the internet.
On March 16th, a Reddit user took it upon himself to compile a list of popular subreddits in one column and, next to it, a list of some of the moderators from each. He then titled the post "92 of top 500 subreddits are controlled by just 4 people." It was just another attempt at clickbait to earn karma points, the points you accrue when other users upvote your posts and comments. While the post was heavily misleading on several counts, it hit the website hard because of its implication that users were being controlled by a small group of "tyrants." As the weeks went on, the same list appeared in a number of Reddit-hating subreddits (yes, you read that correctly, e.g., r/subredditcancer). It might eventually have faded into obscurity, but it was kicked back into the spotlight a few weeks later.
A well-known user submitted the post to three subreddits whose combined subscribers numbered well above 8 million. At that point it went viral, becoming, for a short time, the most popular post on Reddit. Then the moderators made a mistake, in my humble opinion: they removed that user's post without an explanation, and he was then banned from one of the subreddits he belonged to but hadn't been active in for months. After that, the snowball started rolling; one subreddit after another banned him. On May 12th, the user was suspended from Reddit altogether. While this was meant to stem the bleeding, it didn't. Other users took up the 'cause'.
A rhythm developed all over Reddit: the list would get posted and then taken down. The moderators would either give no reason for the deletion or offer only a half-hearted one. Communications between moderators seemed to reveal a coordinated effort to keep the post down because of the lengthy, uncivil arguments it provoked in most of the subreddits where it cropped up. It got to the point where moderators started receiving death threats from users, and the Admins finally stepped in and put a stop to things.
This event put into stark contrast how such situations are handled. Some people are learning to adjust; others made a different decision. One of the four moderators named in the original post opted to delete his entire profile. This is a person who had been building his presence for nine years, and the hatred and vitriol he endured caused him to abandon hundreds, if not thousands, of hours of work. That sounds like a giant waste of an experienced person to me, and it's a little sad.
This whole situation is indicative of how people act on social media. Someone either wants fame, genuinely believes the claim, or just wants to cause chaos. They push a trumped-up, poorly made article, one that doesn't explore all the available information, into the faces of the populace at large. The claim is preposterous to anyone who actually thinks it through, but few people bother; they only pay attention to the headline and the picture, and if it holds even a shred of sense, they run with it. Then, when the moderators started deleting posts, they unintentionally made it seem more real. At that point it became what my generation affectionately calls a "dumpster fire."
I believe experiences like this are what drive people away from social media and give it a bad name. You have people who have forgotten all the decorum of talking to someone in person. They somehow believe death threats are the appropriate response when they don't get to post the image they want. My hope, at least, is that people start remembering that the user on the other end of a post is also a person, with feelings, emotions, and desires just like their own.
I'm not holding my breath for sweeping change anytime soon, particularly on social media, but I will continue to hope that the millennia we've put into communicating with each other in person, from cave paintings to smoke signals, get adapted to the online world.
Instagram announces 3 home feed options, including chronological order
(SOCIAL MEDIA) Instagram is allowing users to choose how their home feed appears so they can tailor their own experience… and chronological is back!
Break out the bottle of champagne, because Instagram is bringing back the chronological feed!
About time, right? Well, that’s not all. Per Protocol, Instagram has announced that they are rolling out three feed options in the first half of 2022. What?! Yes, you read that right.
3 New Feed View Options
- Home: This feed view should feel familiar because it’s the algorithm you already use. No changes to this view.
- Favorites: This feed view option presents a nice and tidy way to view creators, friends, and family of your choosing.
- Following: Last, but not least, is my favorite reboot: the chronological view of every account you follow.
Per Protocol, recent legal allegations claim that Facebook and Instagram have been prioritizing content viewed as harmful in the algorithm, particularly on Instagram, which is widely believed to be harmful to teens. Per the American Psychological Association, "Studies have linked Instagram to depression, body image concerns, self-esteem issues, social anxiety, and other problems." Under scrutiny from lawmakers, Instagram is positioning the chronological feed as a solution.
However, this won't fix everything. Even if the algorithm isn't prioritizing harmful posts, those posts will still exist, and if the account is followed, they can still be seen. The other issue is that unless Instagram lets you choose your default feed view, the algorithmic view could remain the automatic one. Facebook doesn't allow you to make the chronological feed your default, which means you must select that view every time. That bit of friction means it will sometimes be overlooked, and some users may not even know the feature exists. Knowing this about Facebook prepares us for what's to come with Instagram; after all, Facebook, or Meta, owns both.
As an entrepreneur, the chronological view excites me, but I know how much it will actually be used is questionable. I would love knowing that others can see the products and services I offer instead of hoping that Instagram finds my content worthy of a spot in the algorithm.
As a human being with a moral conscience, I have to scream, "C'mon Instagram, you CAN do better!" We all deserve better than having a computer pick what's shown to us. Hopefully, lawmakers will recognize this band-aid fix for what it truly is and continue making real changes to benefit us all.
Facebook’s targeting options for advertising are changing this month
(SOCIAL MEDIA) Do you market your business on Facebook? You need to know that their targeting options for ads are changing and what to do about it.
Meta is transforming Facebook's ad campaigns beginning January 19th. Facebook, which has infamously battled criticism over election ads on its platform, is scaling back its detailed targeting options. Per this Facebook blog post, the changes eliminate the ability to target users based on interactions with content related to health (e.g., "Lung cancer awareness," "World Diabetes Day"), race and ethnicity, political affiliation, religious practices (e.g., "Catholic Church" and "Jewish holidays"), and sexual orientation (e.g., "same-sex marriage" and "LGBT culture").
These changes go into effect on January 19, 2022; after that date, Facebook will no longer allow new ads to use these targeting tools. By March 17, 2022, existing ads that use them will no longer be allowed either.
The VP of Ads and Business Product Marketing at Facebook, Graham Mudd, expressed the belief that personalized ad experiences are the best, but followed up by stating:
“[W]e want to better match people’s evolving expectations of how advertisers may reach them on our platform and address feedback from civil rights experts, policymakers, and other stakeholders on the importance of preventing advertisers from abusing the targeting options we make available.”
To help soften the blow, Facebook is offering tips and examples for small businesses, non-profits, and advocacy groups to continue reaching their audiences in ways that go beyond broad targeting by gender and age.
These tips cover alternative targeting types such as Engagement Custom Audiences, Lookalike Audiences, Website Custom Audiences, Location Targeting, and Customer Lists from a Custom Audience.
Here’s the lowdown on how it will happen.
Per the Search Engine Journal, you can change budget amounts or campaign names without impacting targeting until March 17th. However, editing at the ad set level will trigger changes at the audience level.
If you need to keep a particular ad for reuse, it may be best to edit its detailed targeting settings before March 17th to ensure you can still make changes to it in the future.
I believe it was Heraclitus who declared that change is the only constant. Knowing this, we can expect other social platforms to follow suit and adjust their targeting in the future as well.
Hate speech seemingly spewing on your Facebook? You’re not wrong
(SOCIAL MEDIA) Facebook (now Meta) employees estimate its AI tools only clean up 3%-5% of hate speech on the platform. Surprise, Surprise *eye roll*
As Facebook moves further toward Zuckerberg’s Metaverse, concerns about the efficiency with which the company addresses hate speech still remain, with employees recently estimating that only around 2% of offending materials are removed by Facebook’s AI screening tools.
According to Wall Street Journal, internal documents from Facebook show an alarming inability to detect hate speech, violent threats, depictions of graphic content, and other “sensitive” issues via their AI screening. This directly contradicts predictions made by the company in the past.
A "senior engineer" also admitted that, beyond removing only around 2% of inappropriate material, the odds of substantially improving that figure are slim: "Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term."
The reported efficacy of the AI in question would be laughable were the situation less dire. The internal documents reference reports ranging from the AI confusing cockfights with car crashes to inaccurately identifying a car wash video as a first-person shooting, while far more sobering imagery (live-streamed shootings, viscerally graphic car wrecks, and open threats of violence against transgender children) went entirely unflagged.
Even the system in which the AI works is a source of doubt for employees. “When Facebook’s algorithms aren’t certain enough that content violates the rules to delete it, the platform shows that material to users less often—but the accounts that posted the material go unpunished,” reports Wall Street Journal.
AI has repeatedly been shown to struggle with bias as well. Large Language Models (LLMs), the machine-learning systems that inform things like search engine results and predictive text, have defaulted to racist or xenophobic rhetoric when subjected to prompts containing terms like "Muslim," raising ethical concerns about whether these tools are actually capable of addressing things like hate speech.
As a whole, Facebook employees’ doubts about the actual usefulness of AI in removing inappropriate material (and keeping underage users off of the platform) paint a grim portrait of the future of social media, especially as the Metaverse marches steadily forward in mainstream consumption.