Social Media

Social Media: being a user doesn’t mean you are a good practitioner

Two case studies illustrate the difference between a seasoned practitioner and a digital manager who fumbled a crisis. They show that social media is far more complex than tweeting, and that hiring for a social media position is more complicated still.

Businesses getting in on the trend

Even in a down economy, there are still growth areas in the job market. Some are obvious (unfortunately, like repo guys), but others are less so, like jobs in the social media sector.

Our economy transitioned to a service-based market some time ago, but what some people may not realize is that the explosion of social media properties as communications platforms has spread from individuals to businesses. Businesses usually follow individuals in adopting social media, but they are catching up. Increasingly, business people are saying, “I want some of that.” And by “that,” they mean traffic, awareness and exposure for their companies, services or products.

Social Media Help Wanted

Your potential employers are creating jobs for you to fill. Seventy-one percent of companies use Facebook, 59 percent use Twitter, 50 percent use blogs, 33 percent use YouTube, 33 percent use message boards and six percent use MySpace (which has fallen off the social media radar). Plus, an anticipated 43 percent of companies will employ a corporate blog in 2012.

Employers who are adopting these tools will need people not only to help them manage these efforts properly, but to use them to achieve communications or marketing objectives. They need seasoned advice from people who understand how social media impacts communications.

Calling All Grown-Ups

The desire of businesses to use social media for public relations, public affairs or, most likely, marketing has created a parallel need for grown-ups: people who are not only familiar with the platforms, but who also know enough about them to offer expert counsel to internal clients (within companies) or external clients (in an agency). In short, it’s one thing to know how to use Facebook, but how do you advise a major brand on carrying out B2B (business-to-business) communication? It’s not just about status updates and Farmville. If you want a successful career as a social media consultant, you need first to be a solid communicator and second to know enough to understand where Facebook fits into an existing marketing plan or communications mix – including crisis communications.

The Good and the Bad

When people say to me “I have good news, and I have bad news,” I always ask for the bad news first. I have two examples of how the use of social media both hurt and helped major corporations. Let’s look at the bad example first.

The bad: Nestle

In March of 2010, Greenpeace turned up the social media heat on Nestle, a global candy manufacturer, with a campaign against the company’s use of palm oil in its products (background here in a CNET article). In a concerted effort, thousands of Greenpeace supporters began posting on the company’s Facebook page – over a weekend, when it was likely that an adult was not in charge. Greenpeace urged its supporters to change their profile pictures to something anti-Nestle and top it off with an anti-Nestle comment posted to the Nestle Facebook wall. Whoever was in charge of the page that weekend did about the worst thing you can do in that situation. He or she began deleting negative comments and engaged in back-and-forth snark that was, predictably, captured in screenshots by Greenpeace supporters, who then accused the company of censorship. Sample responses from a Nestle rep, like “Oh please…it’s like we’re censoring everything to allow only positive comments,” didn’t calm things down.

The end result? By putting someone without a clue about crisis communications in charge of its Facebook page over a weekend, Nestle brought a whole lot of publicity to Greenpeace’s campaign and a whole lot of unwanted attention to its own company.

The good: JetBlue

In 2011, because of a snowstorm, a JetBlue plane was diverted from Newark to Hartford, Connecticut, and sat on the tarmac for over seven hours while the pilots begged airport officials to find a way to get the plane towed to a gate so the passengers could get off. Seven hours. Following the precedent set in 2007 by JetBlue founder and then-CEO David Neeleman, who publicly apologized for another mishap via YouTube, Chief Operating Officer Rob Maruster apologized via YouTube to the carrier’s customers after hundreds of passengers were stranded on six planes for several hours during another weekend snowstorm.

Whoever was providing social media advice to JetBlue senior management “got it.” It’s one thing to offer an apology in a press release, but by putting a senior executive out front – his face, his voice and his words, via YouTube – the company helped both crises die down fairly quickly. JetBlue provides a good example of how using social media quickly and effectively can help defuse a crisis.

The bottom line

The bottom line? The explosion of social media as a business tool is creating job opportunities for seasoned professionals. But being an avid user does not mean that you are ready to start giving online communications advice on a very big stage.

Mark Story is the Director of New Media for the U.S. Securities and Exchange Commission in Washington, DC. He has worked in the social media space for more than 15 years for global public relations firms, most recently, Fleishman-Hillard. Mark has also served as adjunct faculty at Georgetown University and the University of Maryland. Mark is currently writing a book, "Starting a Career in Social Media" due to be published in 2012.

50 Comments

  1. Steve Veltkamp

    March 6, 2012 at 11:49 am

The two incidents are not relevant to each other. In one case, there was a dedicated and coordinated campaign against a company that did nothing wrong. In the other, there was a clear reason for a CEO to apologize to a small group of people who were inconvenienced. It would be more helpful to discuss what a company should do when under large-scale attack by a social media special interest group.

Social Media

The FBI has a new division to investigate leaks to the media

(MEDIA) The FBI has launched a division dedicated entirely to investigating leaks, and the details of its formation and progress are pretty surprising…

Expanding its capability to investigate potential governmental leaks to the media, the Federal Bureau of Investigation (FBI) created a new unit to address those threats in 2018.

Documents obtained by TYT as part of its investigation attribute the need for the unit to a “rapid” increase in the number of leaks to the media from governmental sources.

“The complicated nature of — and rapid growth in — unauthorized disclosure and media leak threats and investigations has necessitated the establishment of a new Unit,” one of the released and heavily redacted documents reads.

The FBI appears to have created accounting functions to support the new division, with one document dated May 2018 revealing that a cost code for the new unit was approved by the FBI’s Resource Analysis Unit.

In August 2017, former Attorney General Jeff Sessions stated that such a unit had already been formed to handle these types of investigations, which he had deemed too few in number shortly after taking office in February 2017.

By November of the same year, Sessions claimed that the number of investigations by the Justice Department had increased by 800%, as the Trump administration sought to put an end to the barrage of leaks regarding both personnel and policy that appeared to come from within the ranks of the federal government.

The investigation and prosecution of leaks to the media from government reached a zenith under the Obama administration, which relied on a United States law that originated over 100 years ago, in 1917, and had long gone unused for such purposes.

The Espionage Act makes it a crime to release, without authorization, information deemed secret in the interest of national security that could be used to harm the United States or aid an enemy. While its application is controversial, the Obama administration used it to prosecute more than twice as many alleged leakers as all previous administrations combined, a total of 10 leak-related prosecutions.

In July 2018, Reality Winner pled guilty to one felony count of leaking classified information, the first successful prosecution under the Trump administration of someone who leaked governmental secrets to the media.

Winner, a former member of the Air Force and a contractor for the National Security Agency at the time of her arrest, was accused of sharing with the news media a classified report regarding alleged Russian involvement in the 2016 election. Her agreed-upon sentence of 63 months in prison was longer than the average for those convicted of similar crimes, with the typical sentence ranging from one to three and a half years.

Defendants charged under the Espionage Act face an added challenge in mounting their case: they are prohibited from arguing that their disclosure was in the public interest.

Social Media

MeWe – the social network for your inner Ron Swanson

MeWe, a new social media site, seems to offer everything Facebook does and more, but with privacy as a foundation of its business model. Said MeWe user Melissa F., “It’s about time someone figured out that privacy and social media can go hand in hand.”

Let’s face it: Facebook is kind of creepy. Between facial recognition technology, demanding your real name, and mining your accounts for data, social media is becoming increasingly invasive. Users have looked for alternatives to mainstream social media that genuinely value privacy, but the alternatives to Facebook have been lackluster.

MeWe is poised to change all of that, if it can muster up a network strong enough to compete with Facebook. On paper, the new social media site seems to offer everything Facebook does and more, but with privacy as a foundation of its business model. Said MeWe user Melissa F., “It’s about time someone figured out that privacy and social media can go hand in hand.”

MeWe prioritizes privacy in every aspect of the site, and in fact, users are protected by a “Privacy Bill of Rights.” MeWe does not track, mine, or share your data, and does not use facial recognition software or cookies. (In fact, you can take a survey on MeWe to estimate how many cookies are currently tracking you – apparently I have 18 cookies spying on me!)

You don’t have to share that “as of [DATE] my content belongs to me” status anymore.

Everything you post on MeWe belongs to you – the site does not try to claim ownership over your content – and you can download your profile in its entirety at any time. MeWe doesn’t even pester you with advertising. Instead of making money by selling your data (hence the hashtag #Not4Sale) or advertising, the site plans to profit by offering additional paid services, like extra data and bonus apps.

So what does MeWe do? Everything Facebook does, and more. You can share photos and videos, send messages or live chat. You can also attach voice messages to any of your posts, photos, or videos, and you can create Snapchat-like disappearing content.

You can also sync your profile to stash content in your personal storage cloud. Everything you post is protected, and you can fine-tune the permission controls so that you can decide exactly who gets to see your content and who doesn’t – “no creepy stalkers or strangers.”

MeWe is available for Android, iOS, desktops, and tablets.

This story was originally published in January 2016, but the social network suddenly appears to be gaining traction.

Social Media

Reddit CEO says it’s impossible to police hate speech, and he’s 100% right

(SOCIAL MEDIA) Moderating speech online is a slippery slope, and Reddit’s CEO argues that it’s impossible. Here’s why censorship of hate speech is still so complicated.

Reddit often gets a bad rap in the media for being a cesspool of offensive language and a breeding ground for extreme, harmful ideas. This is due in part to the company’s refusal to moderate or ban hate speech.

In fact, Reddit CEO Steve Huffman recently stated that it’s not possible for the company to moderate hate speech. Huffman noted that since hate speech can be “difficult to define,” enforcing a ban would be “a nearly impossible precedent to uphold.”

As lazy as that may sound, anyone who has operated massive online groups (as we do) knows this to be unfortunate but true.

Currently, Reddit policy prohibits “content that encourages, glorifies, incites, or calls for violence or physical harm against an individual or a group of people […or] that glorifies or encourages the abuse of animals.”

Just about anything else is fair game. Sure, subreddit forums have been shut down in the past, but typically as the result of public pressure. Back in 2015, several subreddits were removed, including ones focused on mocking overweight people, transgender folks, and people of color.

However, other equally offensive subreddits didn’t get the axe. Reddit’s logic was that the company had received complaints that the now-retired subreddits were harassing others on and offline. Offensive posts are permitted; actual harassment is not.

Huffman previously stated, “On Reddit, the way in which we think about speech is to separate behavior from beliefs.” So posting something horribly racist won’t get flagged unless there’s evidence that users crossed the line from free speech to harassing behavior.

Drawing the line between harassment and controversial conversation is where things get tricky for moderators.

Other social media sites like Facebook, Instagram, and Twitter at least make an attempt, though. So what’s holding Reddit back?

Well, for one, moderating hate speech isn’t a clear cut task.

Right now, AI can’t fully take the reins because to truly put a stop to hate speech, there must be an understanding of both language and intent.

Since current AI isn’t quite there yet, Facebook currently employs actual people for the daunting task. The company mostly relies on overseas contractors, which can get pretty expensive (and can lack understanding of cultural contexts).

Users post millions of comments to Reddit per day, and paying real humans to sift through every potentially offensive or harassing post could break the bank.
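To see why, consider a rough back-of-envelope calculation, sketched below in Python. Every figure here (comment volume, review time per comment, moderator wage) is an assumption chosen purely for illustration, not an actual Reddit or Facebook number.

```python
# Rough, illustrative estimate of what all-human moderation could cost.
# All figures are assumptions for the sake of the sketch.

comments_per_day = 3_000_000   # assumed daily comment volume
seconds_per_review = 10        # assumed review time per comment
hourly_wage = 15.00            # assumed fully loaded cost per moderator-hour

review_hours_per_day = comments_per_day * seconds_per_review / 3600
daily_cost = review_hours_per_day * hourly_wage
full_time_moderators = review_hours_per_day / 8   # assuming 8-hour shifts

print(f"Review hours per day: {review_hours_per_day:,.0f}")         # ~8,333
print(f"Daily cost: ${daily_cost:,.0f}")                            # ~$125,000
print(f"Full-time moderators needed: {full_time_moderators:,.0f}")  # ~1,042
```

Even with these modest assumptions, reviewing every comment by hand works out to roughly a thousand full-time moderators and a six-figure daily bill.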

Most agree that cost isn’t a relevant excuse, though, so Facebook is looking into buying and developing software specializing in natural language processing as an alternative solution. But right now, Reddit does not seem likely to follow in Facebook’s footsteps.

While Facebook sees itself as a place where users should feel safe and comfortable, Reddit’s stance is that all views are welcome, even potentially offensive and hateful ones.

This April, in an AMA (Ask Me Anything), a user asked straight up whether obvious racism and slurs are against Reddit’s rules.

Huffman responded in part, “the best defense against racism and other repugnant views both on Reddit and in the world, is instead of trying to control what people can and cannot say through rules, is to repudiate these views in a free conversation.”

So essentially, although racism is “not welcome,” it’s also not likely to be banned unless there is associated unacceptable behavior as well.

It’s worth noting that while Reddit as a whole does not remove most hate speech, each subreddit has its own set of rules that may dictate stricter rules. The site essentially operates as an online democracy, with each subreddit “state” afforded the autonomy to enforce differing standards.

Enforcement comes down to moderators, and although some content is clearly hateful, other posts can fall into a grey area.

Researchers at Berkeley recently partnered with the Anti-Defamation League to create The Online Hate Index, an AI program that identifies hate speech. While the program was surprisingly accurate in identifying hate speech, determining the intensity of statements proved difficult.

Plus, many of the same words are used in hate and non-hate comments. AI and human moderators struggle with defining what crosses the line into hate speech. Not all harmful posts are immediately obvious, and when a forum receives a constant influx of submissions, the volume can be overwhelming for moderators.
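The point about overlapping vocabulary is easy to demonstrate. Below is a minimal, purely illustrative sketch of a keyword-based filter; the flagged terms and sample comments are invented, and real systems are far more sophisticated, but the failure mode is the same: word matching alone can’t tell abuse from the speech that quotes or condemns it.

```python
# A toy keyword filter, to illustrate why word matching alone
# cannot separate hate speech from speech that condemns it.
# The flagged terms and example comments are invented for illustration.

FLAGGED_TERMS = {"vermin", "subhuman"}  # hypothetical slur stand-ins

def naive_filter(comment: str) -> bool:
    """Flag a comment if it contains any flagged term, ignoring context."""
    words = {word.strip('.,!?"\'-').lower() for word in comment.split()}
    return bool(words & FLAGGED_TERMS)

attack = "People like that are vermin and should leave."
counter_speech = 'Calling anyone "vermin" is disgusting. Report posts like that.'

print(naive_filter(attack))          # True: correctly flagged
print(naive_filter(counter_speech))  # True: false positive, since intent is ignored
```

A filter like this flags the condemnation just as readily as the attack, which is exactly the grey area that human moderators, and for now AI, struggle to resolve.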

While it’s still worth making any effort to foster healthy online communities, until we get a boost to AI’s language processing abilities, complete hate speech moderation may not be possible for large online groups.
