
Social Media

Social media managers build and destroy brands: social media politics

Social media managers are very human, even the best of the bunch, and politics can stand between your brand and business opportunities if you’re not paying attention.





Social media managers and your brand’s social capital

It’s great to have an in-house social media manager. They’re out front with your corporate social capital: traveling from conference to conference, monitoring private and public groups on Facebook and forums, and keeping an eyeball on Google Alerts in case someone says something mean (or nice) about your brand, ready to hop into the conversation and demonstrate the company’s corporate culture. That’s the ideal. In actuality, there is a dark and seedy side of social media.

There is no doubt that the sophistication of social media has grown by leaps and bounds, from Yahoo chatrooms and IRC (Internet Relay Chat) before them to the mainstream social networks of today – life online is no longer something left to gamers, hackers, and cheaters. It’s completely acceptable today to meet the love of your life online, connect offline, and carry on a perfectly normal relationship. It’s social, right? But it’s more than that: it’s a society, with businesses and consumers, as well as haves and have-nots.

Social media managers as high school hall monitors

The less the medium seems like part of this world, the more it behaves exactly like this world. Social media representatives, and even corporate or product ambassadors, wear brands like a badge of honor, much like the hall monitor in high school: the decider of who passes through the halls and who gets sent to the office.

Even worse for business, to extend the analogy, your ambassador or manager decides which relationships your company builds and which it doesn’t. They often act as a relationship gatekeeper, taking it upon themselves to single-handedly determine who may or may not interact with your brand, whether or not that is in line with your corporate vision.

Social media managers can be seen as bullies

Companies and organizations that are less than popular, and have reason to be defensive, often are. Their ambassadors and managers come across as bullies whether or not that’s their mandate: constituents, non-believers, and avoiders of the Kool-Aid are shouted down, scoffed at, belittled, or simply silenced.

The problem here is that it’s their job to build fans and correct information, but in a way consistent with how the company would handle it – not necessarily the very human behavior displayed by the ambassador or brand manager employed to manage your social media. I use the word human because, although they may be technically savvy and you as a corporate officer are not, you’re putting an awful lot of faith in that human to remain objective in a very sensitive environment.

I’ve personally seen corporate relationships die because the ambassador or social media manager doesn’t understand the larger picture of a potential relationship with the other human they’re engaging.

Stopping your social media manager’s clique behavior

How do you stop this sort of cliquish behavior? How do you protect your brand from a short-sighted product or brand ambassador, or even a low-level social media manager using a big brand name to boost their ego?

First, you should be as savvy as your savvy social media manager – not just regarding technology, but regarding what your employee is up to online, good and bad.

Secondly, set up Google Alerts for your social media manager’s name and blogs so you can monitor their actions online for yourself. Additionally, join the same groups and forums as your social media manager, so that you at least have access to private conversations within those groups where they relate to business. After all, you are the face of the company, aren’t you?

Lastly, don’t depend on your brand ambassador or manager to make decisions about relationships or about who reaches the company – that’s never been their job. It is certainly your job, however, to trust but verify the advice and opinions of your manager. Not all social media managers are bad, but are you certain about yours?

Benn Rosales is the Founder and CEO of The American Genius (AG), a national news network for tech and entrepreneurs, proudly celebrating 10 years in publishing and recently ranked as the #5 startup in Austin. Before founding AG, he founded one of the first digital media strategy firms in the nation and acquired several other firms. His prior resume includes roles at Apple and Kroger Foods, specializing in marketing, communications, and technology integration. He is a recipient of the Statesman Texas Social Media Award and an Inman Innovator Award winner. He has consulted for numerous startups (both early- and late-stage), has built partnerships and bridges between tech recruiters and the best tech talent in the industry, and is well known for organizing the digital community through popular monthly networking events. Benn does not venture into the spotlight often; he believes his biggest accomplishments are the talent he recruits and develops, and he gives all credit to those he’s empowered.



  1. Danny Brown

    January 7, 2013 at 1:29 pm

    Or, you just hire good people. The role doesn’t make the behaviour, the person does. 😉

    • agbenn

      January 7, 2013 at 1:34 pm

      I’ll raise you “Power corrupts; absolute power corrupts absolutely”

      • Danny Brown

        January 7, 2013 at 1:57 pm

        I’ll see your raise and add “Only if the behaviour of that person would let them be corrupted to start with.” I’ve seen people get jobs they weren’t quite ready for but it made them grow up and the company grew because of it. Can power corrupt? For sure. Does it corrupt always? No.

        • agbenn

          January 7, 2013 at 7:49 pm

          I default to trust but verify 🙂 Hire/fire accordingly. We know more today than yesterday, and many decisions made early on should be evaluated. And as I said, not all are bad, but they just can’t help themselves sometimes. Learn a lesson or not, I’ve seen some pretty expensive lessons learned in real time. If pandering for brand fame (find the influencers, and co-opt them) is the objective, it’s already political.

  2. Annette Jett

    January 9, 2013 at 5:52 pm

So true! I have seen this happen, and I avoid the brands that have ‘clique’ ambassadors. I didn’t go for that in my school years, and I certainly don’t now as an adult. You point out the ‘ego.’ I would have to say it is the number one offender for ruining relationships, whether personal or professional. Keep that intact, and everything else should run smoothly.



Can you legally monitor your employees’ online activities? Kinda

(SOCIAL MEDIA) Are the ways you are monitoring your employees online even legal? Did you know there are illegal methods? Yep.




Edward Snowden’s infamous info leak in 2013 brought to light the scope of surveillance measures, raising questions about the legality of monitoring tactics. However, the breach also opened up a broader discussion on best practices for protecting sensitive data.

No company wants to end up with a data breach situation on their hands, but businesses need to be careful when implementing monitoring systems to prevent data loss.

Monitoring your employees’ activity online can be a crucial part of safeguarding proprietary data. However, many legal risks arise when implementing data loss prevention (DLP) methods.

DLP tools like keystroke logging, natural language processing, and network traffic monitoring are all subject to federal and state privacy laws. Before putting any DLP solutions in place, companies need to assess privacy impact and legal risks.
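To make the consent point concrete, here is a minimal sketch of a content-scanning DLP check for outbound messages. The function name, the toy regex patterns, and the consent flag are all illustrative assumptions, not a real DLP product or any particular vendor’s API; real tools use far more robust detection and consent management.

```python
import re

# Toy patterns for sensitive data. Real DLP systems use much more
# sophisticated detection (validation, context, machine learning).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound(message: str, employee_consented: bool) -> list[str]:
    """Flag sensitive patterns in an outbound message (data in transit).

    Refuses to scan unless the employee has consented to monitoring,
    mirroring the consent requirement discussed above for data in transit.
    """
    if not employee_consented:
        raise PermissionError("No monitoring consent on file for this employee")
    return [name for name, pattern in PATTERNS.items() if pattern.search(message)]
```

The design choice worth noting is that the consent check comes first: the scan never runs, even for legitimate business purposes, until consent is recorded.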

First, identify your monitoring needs. Different laws apply to tracking data in transit versus data at rest. Data in transit is any data moving through a network, like sending an email. The Electronic Communications Privacy Act (ECPA) requires consent for tracking any data in transit.

Data at rest is anything relatively immobile, like information stored in a database or archives. Collecting data at rest can fall under the Stored Communications Act (SCA), which typically prohibits unauthorized access or disclosure of electronic communications.

While the SCA does not usually prevent employers from accessing their own systems, monitoring things like Gmail accounts could get messy without proper authorization.

Who you’re tracking matters as well when it comes to consent and prior notification. Even if you’re just monitoring your own employees, you may run into disclosure issues. Some states, like Delaware and Connecticut, prohibit employee monitoring without prior notice.

The ECPA also generally prohibits tracking electronic communication, but exceptions are granted for legitimate business purposes so long as consent is obtained.

Monitoring third-party communications can get tricky with wiretapping laws. In California and Illinois, all parties must be notified of any tracking. This can involve disclosures in the email signatures of outbound employee emails, or a broad notification on the company’s site.

Implied consent arises when third parties continue communicating even with those disclaimers present.
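A signature disclosure can be as simple as appending a fixed notice to every outbound message. The notice wording and helper below are purely illustrative assumptions:

```python
# Illustrative disclosure text; actual wording should come from counsel.
DISCLOSURE = (
    "Notice: messages sent to or from this address may be monitored "
    "for data loss prevention purposes."
)

def with_disclosure(body: str) -> str:
    """Append the monitoring notice unless it is already present."""
    if DISCLOSURE in body:
        return body
    return body + "\n\n--\n" + DISCLOSURE
```

Making the helper idempotent (it skips bodies that already carry the notice) keeps reply chains from accumulating duplicate disclaimers.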

If you want to install DLP software on personal devices used for work, like a company cellphone, you could face a series of fines for not gaining authorization. Incorrect implementation may fall under spyware and computer crime laws.

With any DLP tools and data monitoring, notification and consent are crucial. When planning monitoring, first assess what your privacy needs are, then identify potential risks of implementing any tracking programs.

Define who, where, and why DLP software will apply, and make sure every employee understands the need for tracking. Include consent in employee onboarding, and keep employees updated with changes to your monitoring tactics.

Protecting your company’s data is important, but make sure you’re not unintentionally bending privacy laws with your data loss prevention methods. Regularly check up on your approaches to make sure everything is in compliance with monitoring laws.



Should social media continue to self-regulate, or should Uncle Sam step in?

(MEDIA) Should social media platforms be allowed to continue to regulate themselves or should governments continue to step in? Is it an urgency, or a slippery slope?




Last week, Instagram, Whatsapp, and Facebook suffered a massive outage around the world that lasted for most of the day. In typical Internet fashion, frustrated users took to Twitter to vent their feelings. A common thread throughout all of the dumpster fire gifs was the implication that these social media platforms were a necessary outlet for connecting people with information—as well as being an emotional outlet for whatever they felt like they needed to share.

It’s this dual nature of social media, both as a vessel for content that people consume and as a product that they share personal data with (for followers, but also knowing that the data is collected and analyzed by the companies), that confuses people as to what these things actually are. Is social media a form of innovative technology, or is it more about the content – is it media? Is it both?

Well, the answer depends on how you want to approach it.

Although users may say that content is what keeps them using the apps, the companies themselves purport that the apps are technology. We’ve discussed this distinction before, and how it means that the social media giants get to skirt around having more stringent regulation. 

But, as many point out, if the technology is dependent on content for its purpose (and the companies’ profit): where does the line between personal information and corporate data mining lie?

Should social media outlets known for their platform being used to perpetuate “fake news” and disinformation be held to higher standards in ensuring that the information they spread is accurate and non-threatening?

As it currently stands, social media companies don’t have any legislative oversight—they operate almost exclusively in a state of self-regulation.  This is because they are classified as technology companies rather than media outlets.

This past summer, Senator Mark Warner of Virginia suggested in a widely circulated white paper that social media platforms such as Twitter, Facebook, and Instagram needed regulation. Highlighting the Cambridge Analytica scandal, which rocked the polls and underscored the potential of social media to sway real-life policy by way of propaganda, Warner suggested that lawmakers target three areas for regulation: fighting politically oriented misinformation, protecting user privacy, and promoting competition among Internet markets that will make long-term use of the data collected from users.

Warner isn’t the only person who thinks that social media’s current unmoored, self-regulated existence is a bit of a problem, but the problem only comes from what would be considered user error: the people using social media have forgotten that they are the product, not the apps.

Technically, many users of social media have signed their privacy away by clicking “accept” on terms and conditions they haven’t fully read.* The issue of being able to determine whether or not a meme is Russian propaganda isn’t a glitch in code; it’s a way to exploit media illiteracy and confirmation bias.

So, how can you regulate human behavior? Is it on the tech companies to try to be better than the tendencies of the people who use them? Ideally they wouldn’t have to be told not to take advantage of people, but when people willingly sign up to be taken advantage of, whom do you target?

It’s a murky question, and it’s only going to get trickier to solve the more social media embeds itself into our culture.

*Yes, I’m on social media and I blindly clicked it too! He who is without sin, etc.



Deepfakes can destroy any reputation, company, or country

(MEDIA) Deepfakes have been around for a few years now, but they’re being crafted for nefarious purposes beyond the original porn and humor uses.




Deepfakes – a technology originally used by Reddit perverts who wanted to superimpose their favorite actresses’ faces onto the bodies of porn stars – have come a long way since the original Reddit group was banned.

Deepfakes use artificial intelligence (AI) to create bogus videos by analyzing facial expressions to replace one person’s face and/or voice with another’s.

Using computer technology to synthesize videos isn’t exactly new.

Remember in Forrest Gump, how Tom Hanks kept popping up in the background of footage of important historical events, and got a laugh from President Kennedy? It wasn’t created using AI, but the end result is the same. In other cases, such technology has been used to complete a film when an actor dies during production.

The difference between these examples and the latest deepfake technology is a question of ease and access.

Historically, these altered videos have required a lot of money, patience, and skill. But as computer intelligence has advanced, so too has deepfake technology.

Now the computer does the work instead of the human, making it relatively fast and easy to create a deepfake video. In fact, Stanford created a technology using a standard PC and webcam, as I reported in 2016.

Nowadays, your average Joe can access open source deepfake apps for free. All you need is some images or video of your victim.

While the technology has mostly been used for fun – such as superimposing Nicolas Cage into classic films – deepfakes could be, and have been, used for nefarious purposes.

There is growing concern that deepfakes could be used for political disruption, for example, to smear a politician’s reputation or influence elections.

Legislators in the House and Senate have requested that intelligence agencies report on the issue. The Department of Defense has already commissioned researchers to teach computers to detect deepfakes.

One promising technology developed at the University at Albany analyzes blinking to detect deepfakes, as subjects in the faked videos usually do not blink as often as real humans do. Ironically, in order to teach computers how to detect them, researchers must first create many deepfake videos. It seems that deepfake creators and detectors are locked in a sort of technological arms race.
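The blink cue can be made concrete. One standard measure in blink detection generally is the eye aspect ratio (EAR): two vertical eye-landmark distances divided by the horizontal one, which collapses when the eye closes. The sketch below is illustrative, not the Albany team’s actual code; the landmark coordinates and threshold are assumptions, and a real detector would obtain the six eye landmarks per frame from a facial-landmark model.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR from six (x, y) eye landmarks: the two vertical distances
    over the horizontal distance. The value drops sharply on eye closure."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.2):
    """Count blink onsets: frames where EAR first crosses below threshold."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    return blinks
```

A detector of this kind compares the blink count over a clip against a typical human rate (roughly every few seconds); a long stretch with almost no blinks is the red flag.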

The falsified videos have the potential to exacerbate the information wars, either by producing false evidence or by calling real videos into question. People are already all too eager to believe conspiracy theories and fake news as it is, and faked videos could be created to back up these bogus theories.

Others worry that the existence of deepfake videos could cast doubt on actual, factual videos. Thomas Rid, a professor of strategic studies at Johns Hopkins University, says that deepfakes could lead to “deep denials” – in other words, “the ability to dispute previously uncontested evidence.”

While there have not yet been any publicly documented cases of attempts to influence politics with deepfake videos, people have already been harmed by the faked videos.

Women have been specifically targeted. Celebrities and civilians alike have reported that their likeness has been used to create fake sex videos.

Deepfakes prove that just because you can achieve an impressive technological feat doesn’t always mean you should.

