Social Media

If you love your social media message, set it free – case study

Why this matters for social media

One of the most nauseating phrases ever uttered is “If you love something, set it free… If it comes back, it’s yours, if it doesn’t, it never was yours.” I am not sure to whom to attribute this, but I apologize in advance for even bringing it up.

But when it comes to social media, it matters. A lot. Set it free.

I am in the middle of AGBeat columnist Maddie Grant and Jamie Notter’s book, “Humanize,” in which, early on, they make the case that upper-level management in organizations needs to understand that participating meaningfully in social media – and being authentic and human – means letting go of the message. You can’t control it anymore, and if you are a large organization, the discussion is already taking place – with or without you, as the U2 song goes.

It used to be that large and medium-sized organizations got their messages out through a one-to-many communications method, most of the time a press release. The intended effect was to create an inverted funnel in which the organization would control the message, the timing and the reaction. Gone, gone, gone.

Today’s communications environment

I like to compare today’s communications environment (heavily influenced by social media) to being in the middle of a tornado. The discussions, debates, arguments and the like are swirling around you. They are taking place. By being authentic and “human” (thanks Maddie and Jamie), organizations can hope to (at times) participate in the wind and sometimes even slightly redirect it, but you can’t stop a tornado.

When major corporations attempt to violate the spirit of social media – being authentic, listening and participating in conversations with customers or other stakeholders – bad things happen.

What bad things happen?

Last week, I was listening to my favorite podcast, “For Immediate Release,” during which the hosts, Shel Holtz and Neville Hobson, discussed a Google+ comment by Scott Monty, the head of social media at Ford Motor Company. Scott’s Google+ post (which, as I write this, is no longer available at its original link) read:

“Shel & Neville – not sure if you guys have covered this on the show, but what are your thoughts on companies posting their own Terms of Use on Facebook? I noticed this one because someone called out that National doesn’t allow UGC [user generated content] that criticizes them. Our own legal department is concerned, because FB’s TOS are designed to protect FB, not brands.”

Shel and Neville went on to discuss that, purportedly, National Car Rental was deleting Facebook Wall posts that were negative towards the company. This is a serious social media infraction. It violates the spirit of creating conversation and ruins an opportunity to engage with customers and offer the 33,638 people who have “liked” the company a front-row view of an organization that is open, honest and willing to take on problems. Note: I have no third-party confirmation that National is doing this, but I did find this in the terms of use on their Facebook page:

You may not post any User Content that:

  • Infringes any person’s legal rights, including any right of privacy and publicity
  • Is defamatory, infringing, abusive, obscene, indecent, deceptive, threatening, harassing, misleading or unlawful;
  • Contains any code, application, software or material protected by intellectual property laws or any malicious code including any programs that may damage the operation of another person’s computer or which contains any other form of virus or malware;
  • Disparages, slanders, criticizes, or maligns National;
  • Is commercial in nature and advertises any product, service, or good other than National, unless you have obtained National’s prior consent.
  • You will not rely upon any claim or statement made, or anything contained in any User Content. This Facebook Fan Page is for entertainment only. It is not an authorized source of information about National or our brands, vehicles, or services. If you are looking for that type of information, please visit our website at www.nationalcar.com.

The bullet point “disparages, slanders, criticizes, or maligns National” is the one that caught my attention. What if the National car you rent is a clunker and it dies on your way to an important meeting? Does that mean that you cannot, in a public way on a platform set up by National, “criticize” the company?

If you can “like” them, why can’t you, publicly, “dislike” them? If this is the case, I am wondering why National even bothers having a Wall if it is pre-ordained that everything will be sunshine and chocolate, and if not, potentially removed.

The takeaway:

My final point: Scott Monty is well known and well respected as the head of social media at Ford. National Car Rental rents Ford cars. When I went to look for the original Google+ post (I found what I have posted above on the FIR Google+ account), the comment had been removed. It could well be a technical glitch. I hope so.

I believe that Scott truly gets social media, but I sincerely hope that his comment was not censored by National or Ford for “disparaging, slandering, criticizing, or maligning National.”

So, National Car Rental: if you love social media – if you expect to listen, participate, be authentic and human, and respond to consumer complaints on a platform that your company set up – if you love Facebook, set it free.

Mark Story is the Director of New Media for the U.S. Securities and Exchange Commission in Washington, DC. He has worked in the social media space for more than 15 years for global public relations firms, most recently, Fleishman-Hillard. Mark has also served as adjunct faculty at Georgetown University and the University of Maryland. Mark is currently writing a book, "Starting a Career in Social Media" due to be published in 2012.

Social Media

The FBI has a new division to investigate leaks to the media

(MEDIA) The FBI has launched a division dedicated completely to investigating leaks, and the stats on its formation and progress are pretty surprising…

Expanding its capability to investigate potential governmental leaks to the media, the Federal Bureau of Investigation (FBI) created a new unit to address those threats in 2018.

Documents obtained by TYT as part of its investigation attribute the need for the unit to a “rapid” increase in the number of leaks to the media from governmental sources.

“The complicated nature of — and rapid growth in — unauthorized disclosure and media leak threats and investigations has necessitated the establishment of a new Unit,” one of the released and heavily redacted documents reads.

The FBI appeared to create accounting functions to support the new division, with one document dated May 2018 revealing that a cost code for the new unit was approved by the FBI’s Resource Analysis Unit.

In August 2017, former Attorney General Jeff Sessions stated that such a unit had already been formed to handle these types of investigations, which he had deemed too few in number shortly after taking office in February 2017.

By November of the same year, Sessions claimed that the number of investigations by the Justice Department had increased by 800%, as the Trump administration sought to put an end to the barrage of leaks regarding both personnel and policy that appeared to come from within the ranks of the federal government.

The investigation and prosecution of leaks to the media from government reached a zenith under the Obama administration, which relied on a United States law that originated over 100 years ago, in 1917, and had long gone unused for such purposes.

The Espionage Act makes it a criminal act to release, without authorization, information deemed secret in the interests of national security that could be used to harm the United States or aid an enemy. While controversial in application, the Obama administration used it to prosecute more than twice as many alleged leakers as all previous administrations combined, a total of 10 leak-related prosecutions.

In July 2018, Reality Winner pled guilty to one felony count of leaking classified information related to the 2016 election, representing the first successful prosecution under the Trump administration of someone who leaked governmental secrets to the media.

Winner, a former member of the Air Force and a contractor for the National Security Agency at the time of her arrest, was accused of sharing with the news media a classified report regarding alleged Russian involvement in the election of 2016. Her agreed-upon sentence of 63 months in prison was longer than the average for those convicted of similar crimes, with the typical sentence ranging from one to three and a half years.

Defendants charged under the Espionage Act by the FBI face an added challenge in mounting their case: they are prohibited from using disclosure in the public interest as a defense for their actions.

Social Media

MeWe – the social network for your inner Ron Swanson

MeWe, a new social media site, seems to offer everything Facebook does and more, but with privacy as a foundation of its business model. Said MeWe user Melissa F., “It’s about time someone figured out that privacy and social media can go hand in hand.”

Let’s face it: Facebook is kind of creepy. Between facial recognition technology, demanding your real name, and mining your accounts for data, social media is becoming increasingly invasive. Users have looked for alternatives to mainstream social media that genuinely value privacy, but so far the options for replacing Facebook have been lackluster.

MeWe is poised to change all of that, if it can muster up a network strong enough to compete with Facebook. On paper, the new social media site seems to offer everything Facebook does and more, but with privacy as a foundation of its business model. Said MeWe user Melissa F., “It’s about time someone figured out that privacy and social media can go hand in hand.”

MeWe prioritizes privacy in every aspect of the site, and in fact, users are protected by a “Privacy Bill of Rights.” MeWe does not track, mine, or share your data, and does not use facial recognition software or cookies. (In fact, you can take a survey on MeWe to estimate how many cookies are currently tracking you – apparently I have 18 cookies spying on me!)

You don’t have to share that “as of [DATE] my content belongs to me” status anymore.

Everything you post on MeWe belongs to you – the site does not try to claim ownership over your content – and you can download your profile in its entirety at any time. MeWe doesn’t even pester you with advertising. Instead of making money by selling your data (hence the hashtag #Not4Sale) or advertising, the site plans to profit by offering additional paid services, like extra data and bonus apps.

So what does MeWe do? Everything Facebook does, and more. You can share photos and videos, send messages or live chat. You can also attach voice messages to any of your posts, photos, or videos, and you can create Snapchat-like disappearing content.

You can also sync your profile to stash content in your personal storage cloud. Everything you post is protected, and you can fine-tune the permission controls so that you can decide exactly who gets to see your content and who doesn’t – “no creepy stalkers or strangers.”

MeWe is available for Android, iOS, desktops, and tablets.

This story was originally published in January 2016, but the social network suddenly appears to be gaining traction.

Social Media

Reddit CEO says it’s impossible to police hate speech, and he’s 100% right

(SOCIAL MEDIA) Moderating speech online is a slippery slope, and Reddit’s CEO argues that it’s impossible. Here’s why censorship of hate speech is still so complicated.

Reddit often gets a bad rap in the media for being a cesspool of offensive language and a breeding ground for extreme, harmful ideas. This is due in part to the company’s refusal to moderate or ban hate speech.

In fact, Reddit CEO Steve Huffman recently stated that it’s not possible for the company to moderate hate speech. Huffman noted that since hate speech can be “difficult to define,” enforcing a ban would be “a nearly impossible precedent to uphold.”

As lazy as that may sound, anyone who has operated massive online groups (as we do) knows this to be unfortunate but true.

Currently, Reddit policy prohibits “content that encourages, glorifies, incites, or calls for violence or physical harm against an individual or a group of people […or] that glorifies or encourages the abuse of animals.”

Just about anything else is fair game. Sure, subreddit forums have been shut down in the past, but typically as the result of public pressure. Back in 2015, several subreddits were removed, including ones focused on mocking overweight people, transgender folks, and people of color.

However, other equally offensive subreddits didn’t get the axe. Reddit’s logic was that the company had received complaints that the now-retired subreddits were harassing others on and offline. Offensive posts are permitted; actual harassment is not.

Huffman previously stated, “On Reddit, the way in which we think about speech is to separate behavior from beliefs.” So posting something horribly racist won’t get flagged unless there’s evidence that users crossed the line from free speech to harassing behavior.

Drawing the line between harassment and controversial conversation is where things get tricky for moderators.

Other social media sites like Facebook, Instagram, and Twitter at least make an attempt, though. So what’s holding Reddit back?

Well, for one, moderating hate speech isn’t a clear-cut task.

Right now, AI can’t fully take the reins because to truly put a stop to hate speech, there must be an understanding of both language and intent.

Since current AI isn’t quite there yet, Facebook currently employs actual people for the daunting task. The company mostly relies on overseas contractors, which can get pretty expensive (and can lack understanding of cultural contexts).

Users post millions of comments to Reddit per day, and paying real humans to sift through every potentially offensive or harassing post could break the bank.

Most agree that cost isn’t a relevant excuse, though, so Facebook is looking into buying and developing software specializing in natural language processing as an alternative solution. But right now, Reddit does not seem likely to follow in Facebook’s footsteps.

While Facebook sees itself as a place where users should feel safe and comfortable, Reddit’s stance is that all views are welcome, even potentially offensive and hateful ones.

This April, in an AMA (Ask Me Anything), a user straight up asked whether obvious racism and slurs are against Reddit’s rules.

Huffman responded in part, “the best defense against racism and other repugnant views both on Reddit and in the world, is instead of trying to control what people can and cannot say through rules, is to repudiate these views in a free conversation.”

So essentially, although racism is “not welcome,” it’s also not likely to be banned unless there is associated unacceptable behavior as well.

It’s worth noting that while Reddit as a whole does not remove most hate speech, each subreddit has its own set of rules, which may be stricter. The site essentially operates as an online democracy, with each subreddit “state” afforded the autonomy to enforce differing standards.

Enforcement comes down to moderators, and although some content is clearly hateful, other posts can fall into grey area.

Researchers at Berkeley recently partnered with the Anti-Defamation League to create the Online Hate Index project, an AI program that identifies hate speech. While the program was surprisingly accurate in identifying hate speech, determining the intensity of statements proved difficult.

Plus, many of the same words are used in hate and non-hate comments. AI and human moderators struggle with defining what crosses the line into hate speech. Not all harmful posts are immediately obvious, and when a forum receives a constant influx of submissions, the volume can be overwhelming for moderators.

While it’s still worth making any effort to foster healthy online communities, until we get a boost to AI’s language processing abilities, complete hate speech moderation may not be possible for large online groups.
