Social Media

Facebook expands Messenger Kids worldwide, and ups safety features

(SOCIAL MEDIA) Facebook’s video and chat app for children, Messenger Kids, is launching in new countries and introducing stronger safety features to keep kids safe.

With children around the world attending school online, Facebook is attracting a whole new surge of users, and parents want to keep their kids safe. Because of this, Facebook is expanding its Messenger Kids app to more than 70 countries.

The company is also trying to make the app simpler to use while giving children’s guardians (referred to here as parents, as in the app’s parental controls) more control over safety measures.

Messenger Kids has been around in the U.S. since 2017, and expanded into Canada and Peru in 2018. However, this new rollout makes Messenger Kids available in more than 70 countries, introducing various features in waves. Brazil, India, Japan, and New Zealand are among the new countries with access to Messenger Kids.

Facebook also appears to be working to make the platform as safe as possible for children. Messenger Kids recently had a flaw that allowed children to start group chats without parental knowledge, so Facebook needed to beef up security measures before expanding its children’s platform.

Here’s a look at the three new safety features on Facebook’s Messenger Kids app.

  • Supervised friending: Parents can turn on an option that lets their children accept, reject, add, or remove friends on their own. Parents are notified and can review, remove, or block any friends they want to through the Parent Dashboard. Previously, parents had to approve any friend requests directly in their child’s account.
  • Another feature geared toward online schooling allows a parent or designated adult to start a group chat and invite children into it. Think teacher, coach, school director, or principal. This allows class or team discussions to proceed online without delay.
  • The other new feature Messenger Kids is rolling out makes a child’s photo and profile name visible to a select group within the child’s and their parents’ networks, extending to friends of friends, though only with parental permission and only within North America, Central America, and South America.

When deciding which features to add to the Messenger Kids app, Facebook consulted their Youth Advisors. This group, according to Facebook, is “a team of experts in online safety, child development and media…including Safer Internet Day creator Janice Richardson and Agent of Change Foundation chairman Wayne Chau.”

Most adults who allow their kids to use electronic devices with internet access realize that kids are curious, resourceful, and often better at tech than they are. It’s good to see giant communication entities like Facebook working to enhance safety measures for children. Connecting to friends, teachers, classmates, and educational resources is a beautiful thing.

Yet we’ve learned to be wary of Facebook and their aggressive data collection. They must strive to ensure use of their platform isn’t a free fall for the vulnerable into dangerous waters.

Social Media

Instagram announces 3 home feed options, including chronological order

(SOCIAL MEDIA) Instagram is allowing users to choose how their home feed appears so they can tailor their own experience… and chronological is back!

Break out the bottle of champagne, because Instagram is bringing back chronological order!

About time, right? Well, that’s not all. Per Protocol, Instagram has announced that they are rolling out three feed options in the first half of 2022. What?! Yes, you read that right.

3 New Feed View Options

  1. Home: This feed view should feel familiar because it’s the algorithmic feed you already use. No changes to this view.
  2. Favorites: This feed view presents a nice and tidy way to view the creators, friends, and family of your choosing.
  3. Following: Last, but not least, is my favorite reboot: the chronological view of every account that you follow.

Per Protocol, recent legal allegations claim that Facebook and Instagram have been prioritizing content viewed as harmful in their algorithms, Instagram’s in particular. Instagram is widely believed to be harmful to teens. Per the American Psychological Association, “Studies have linked Instagram to depression, body image concerns, self-esteem issues, social anxiety, and other problems.” The company has been under scrutiny by lawmakers and, in response, is posing the chronological feed as a solution.

However, this won’t fix everything. Even if the algorithm isn’t prioritizing harmful posts, those posts will still exist, and if the account is followed, they can still be seen. The other issue with this solution is that unless Instagram lets you choose your default feed view, it could still make the algorithmic view the automatic one. Facebook doesn’t allow you to make the chronological feed your default view, which means you have to choose that view every time. This bit of friction means there will be times it is overlooked, and some users may not even know the functionality exists. Knowing this about Facebook prepares us for what’s to come with Instagram. After all, Facebook, or Meta, owns both.

While the chronological view excites me as an entrepreneur, I know that how much it will actually be used is questionable. I would love to know that others can see the products and services I offer instead of hoping that Instagram’s algorithm finds my content worthy to share.

As a human being with a moral conscience, I have to scream, “C’mon Instagram, you CAN do better!” We all deserve better than having a computer pick what’s shown to us. Hopefully, lawmakers will recognize this band-aid quick fix for what it truly is and continue with making real changes to benefit us all.

Social Media

Facebook’s targeting options for advertising are changing this month

(SOCIAL MEDIA) Do you market your business on Facebook? You need to know that their targeting options for ads are changing and what to do about it.

Meta is transforming Facebook’s ad campaigns beginning January 19th. Facebook, which has infamously battled criticism regarding election ads on its platform, is placing new limits on its ad targeting options. Per this Facebook blog post, the changes eliminate the ability to target users based on interactions with content related to health (e.g., “Lung cancer awareness”, “World Diabetes Day”), race and ethnicity, political affiliation, religious practices (e.g., “Catholic Church” and “Jewish holidays”), and sexual orientation (e.g., “same-sex marriage” and “LGBT culture”).

These changes go into effect on January 19, 2022. Facebook will no longer allow new ads to use these targeting tools after that date. By March 17, 2022, any existing ads using those targeting tools will no longer be allowed.

The VP of Ads and Business Product Marketing at Facebook, Graham Mudd, expressed the belief that personalized ad experiences are the best, but followed up by stating:

“[W]e want to better match people’s evolving expectations of how advertisers may reach them on our platform and address feedback from civil rights experts, policymakers, and other stakeholders on the importance of preventing advertisers from abusing the targeting options we make available.”

To help soften the blow, Facebook is offering tips and examples to help small businesses, non-profits, and advocacy groups continue to reach their audiences in ways that go beyond broad targeting by gender and age.

These tips include using other targeting types such as Engagement Custom Audiences, Lookalike Audiences, Website Custom Audiences, Location Targeting, and Customer Lists from a Custom Audience.

Here’s the lowdown on how it will happen.

Per Search Engine Journal, changes can be made to budget amounts or campaign names without impacting the targeting until March 17th. However, editing anything at the ad set level will trigger changes at the audience level.

If you need to keep a particular ad for reuse, it may be best to edit its detailed targeting settings before March 17th to ensure you can still make changes to it in the future.

I believe it was Heraclitus who declared that change is constant. Knowing this, we can expect that other social platforms may follow suit and adjust their targeting in the future as well.

Social Media

Hate speech seemingly spewing on your Facebook? You’re not wrong

(SOCIAL MEDIA) Facebook (now Meta) employees estimate its AI tools only clean up 3%-5% of hate speech on the platform. Surprise, Surprise *eye roll*

As Facebook moves further toward Zuckerberg’s Metaverse, concerns about the efficiency with which the company addresses hate speech still remain, with employees recently estimating that only around 2% of offending materials are removed by Facebook’s AI screening tools.

According to The Wall Street Journal, internal documents from Facebook show an alarming inability to detect hate speech, violent threats, depictions of graphic content, and other “sensitive” issues via its AI screening. This directly contradicts predictions made by the company in the past.

A “senior engineer” also admitted that, in addition to removing only around 2% of inappropriate material, the odds of that number ever reaching even a majority are extremely slim: “Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term.”

The reported efficacy of the AI in question would be laughable were the situation less dire. Reports ranging from AI confusing cockfights and car crashes to inaccurately identifying a car wash video as a first-person shooting are referenced in the internal documents, while far more sobering imagery–live-streamed shootings, viscerally graphic car wrecks, and open threats of violence against transgender children–went entirely unflagged.

Even the system in which the AI works is a source of doubt for employees. “When Facebook’s algorithms aren’t certain enough that content violates the rules to delete it, the platform shows that material to users less often—but the accounts that posted the material go unpunished,” reports The Wall Street Journal.

AI has repeatedly been shown to struggle with bias as well. Large Language Models (LLMs)–machine-learning algorithms that inform things like search engine results and predictive text–have defaulted to racist or xenophobic rhetoric when subjected to search terms like “Muslim”, leading to ethical concerns about whether or not these tools are actually capable of resolving things like hate speech.

As a whole, Facebook employees’ doubts about the actual usefulness of AI in removing inappropriate material (and keeping underage users off of the platform) paint a grim portrait of the future of social media, especially as the Metaverse marches steadily forward in mainstream consumption.
