Social Media

Can you legally monitor your employees’ online activities? Kinda

(SOCIAL MEDIA) Are the ways you're monitoring your employees online even legal? Did you know there are illegal methods? Yep.


Edward Snowden's infamous 2013 leak brought to light the scope of surveillance measures, raising questions about the legality of monitoring tactics. However, the breach also opened up a broader discussion on best practices for protecting sensitive data.

No company wants to end up with a data breach on its hands, but businesses need to be careful when implementing monitoring systems to prevent data loss.

Monitoring your employees' activity online can be a crucial part of safeguarding proprietary data. However, implementing data loss prevention (DLP) methods carries real legal risks.

DLP tools like keystroke logging, natural language processing, and network traffic monitoring are all subject to federal and state privacy laws. Before putting any DLP solutions in place, companies need to assess privacy impact and legal risks.

First, identify your monitoring needs. Different laws apply to tracking data in transit versus data at rest. Data in transit is any data moving through a network, like sending an email. The Electronic Communications Privacy Act (ECPA) requires consent for tracking any data in transit.

Data at rest is anything relatively immobile, like information stored in a database or archives. Collecting data at rest can fall under the Stored Communications Act (SCA), which typically prohibits unauthorized access or disclosure of electronic communications.
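
To make that distinction concrete, here is a minimal sketch (in Python, with hypothetical technique names and a deliberately simplified mapping) of how a compliance checklist might tag each planned monitoring technique with the category of data it touches and the statute discussed above. It's illustrative only, not legal advice.

```python
# Illustrative only, not legal advice: a hypothetical checklist that tags each
# planned monitoring technique with the data category it touches and the federal
# statute discussed above. State laws may add further requirements.

DLP_TECHNIQUES = {
    "network traffic monitoring": {"data": "in transit", "statute": "ECPA"},
    "email gateway scanning":     {"data": "in transit", "statute": "ECPA"},
    "database access auditing":   {"data": "at rest",    "statute": "SCA"},
    "archive/file scanning":      {"data": "at rest",    "statute": "SCA"},
}

def statutes_in_play(planned: list[str]) -> set[str]:
    """Return the statutes implicated by a planned set of techniques."""
    return {DLP_TECHNIQUES[t]["statute"] for t in planned if t in DLP_TECHNIQUES}

# A rollout that mixes both categories raises both consent (ECPA) and
# authorization (SCA) questions.
print(statutes_in_play(["network traffic monitoring", "database access auditing"]))
```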

While the SCA does not usually prevent employers from accessing their own systems, monitoring things like Gmail accounts could get messy without proper authorization.

Who you're tracking also matters when it comes to consent and prior notification. Even if you're only monitoring your own employees, you may run into disclosure issues. Some states, like Delaware and Connecticut, prohibit employee monitoring without prior notice.

The ECPA also generally prohibits tracking electronic communication, but exceptions are granted for legitimate business purposes so long as consent is obtained.

Monitoring third-party communications can get tricky under wiretapping laws. In California and Illinois, all parties must be notified of any tracking. This can involve disclaimers in the signatures of outbound employee emails, or a broad notification on the company's site.

Implied consent is established when third parties continue the communication even with those disclaimers present.

If you want to install DLP software on personal devices used for work, such as an employee's own cellphone, you could face a series of fines for failing to obtain authorization. Incorrect implementation may run afoul of spyware and computer crime laws.

With any DLP tools and data monitoring, notification and consent are crucial. When planning monitoring, first assess what your privacy needs are, then identify potential risks of implementing any tracking programs.

Define who, where, and why DLP software will apply, and make sure every employee understands the need for tracking. Include consent in employee onboarding, and keep employees updated with changes to your monitoring tactics.
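
As a purely illustrative example (the field names and policy identifiers are hypothetical), a lightweight way to keep an auditable trail of those disclosures and acknowledgments during onboarding, and again whenever the policy changes, might look something like this:

```python
# A minimal sketch (hypothetical field names) of keeping an auditable record of
# monitoring disclosures and employee consent, e.g. as part of onboarding.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    employee_id: str
    policy_version: str          # which monitoring policy the employee acknowledged
    scope: list[str]             # what is monitored, e.g. ["email", "network traffic"]
    acknowledged_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

consent_log: list[ConsentRecord] = []

def record_consent(employee_id: str, policy_version: str, scope: list[str]) -> None:
    """Append a timestamped acknowledgment; re-run whenever the policy changes."""
    consent_log.append(ConsentRecord(employee_id, policy_version, scope))

record_consent("emp-0042", "monitoring-policy-v2", ["email", "network traffic"])
```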

Protecting your company's data is important, but make sure you're not unintentionally breaking privacy laws with your data loss prevention methods. Regularly review your approach to make sure everything is in compliance with monitoring laws.

Lindsay is an editor for The American Genius with a Communication Studies degree and English minor from Southwestern University. Lindsay is interested in social interactions across and through various media, particularly television, and will gladly hyper-analyze cartoons and comics with anyone, cats included.

Social Media

MeWe – the social network for your inner Ron Swanson

MeWe, a new social media site, seems to offer everything Facebook does and more, but with privacy as a foundation of its business model. Said MeWe user Melissa F., “It’s about time someone figured out that privacy and social media can go hand in hand.”


Let’s face it: Facebook is kind of creepy. Between facial recognition technology, demanding your real name, and mining your accounts for data, social media is becoming increasingly invasive. Users have looked for alternatives to mainstream social media that genuinely value privacy, but the alternatives to Facebook have been lackluster.

MeWe is poised to change all of that, if it can muster up a network strong enough to compete with Facebook. On paper, the new social media site seems to offer everything Facebook does and more, but with privacy as a foundation of its business model. Said MeWe user Melissa F., “It’s about time someone figured out that privacy and social media can go hand in hand.”

MeWe prioritizes privacy in every aspect of the site, and in fact, users are protected by a “Privacy Bill of Rights.” MeWe does not track, mine, or share your data, and does not use facial recognition software or cookies. (In fact, you can take a survey on MeWe to estimate how many cookies are currently tracking you – apparently I have 18 cookies spying on me!)


You don’t have to share that “as of [DATE] my content belongs to me” status anymore.

Everything you post on MeWe belongs to you – the site does not try to claim ownership over your content – and you can download your profile in its entirety at any time. MeWe doesn’t even pester you with advertising. Instead of making money by selling your data (hence the hashtag #Not4Sale) or advertising, the site plans to profit by offering additional paid services, like extra data and bonus apps.

So what does MeWe do? Everything Facebook does, and more. You can share photos and videos, send messages or live chat. You can also attach voice messages to any of your posts, photos, or videos, and you can create Snapchat-like disappearing content.

You can also sync your profile to stash content in your personal storage cloud. Everything you post is protected, and you can fine-tune the permission controls so that you can decide exactly who gets to see your content and who doesn’t – “no creepy stalkers or strangers.”

MeWe is available for Android, iOS, desktops, and tablets.

This story was originally published in January 2016, but the social network suddenly appears to be gaining traction.


Social Media

Reddit CEO says it’s impossible to police hate speech, and he’s 100% right

(SOCIAL MEDIA) Moderating speech online is a slippery slope, and Reddit’s CEO argues that it’s impossible. Here’s why censorship of hate speech is still so complicated.


Reddit often gets a bad rap in the media for being a cesspool of offensive language and a breeding ground for extreme, harmful ideas. This is due in part to the company's refusal to moderate or ban hate speech.

In fact, Reddit CEO Steve Huffman recently stated that it’s not possible for the company to moderate hate speech. Huffman noted that since hate speech can be “difficult to define,” enforcing a ban would be “a nearly impossible precedent to uphold.”

As lazy as that may sound, anyone who has operated massive online groups (as we do) knows this to be unfortunate but true.

Currently, Reddit policy prohibits “content that encourages, glorifies, incites, or calls for violence or physical harm against an individual or a group of people […or] that glorifies or encourages the abuse of animals.”

Just about anything else is fair game. Sure, subreddit forums have been shut down in the past, but typically as the result of public pressure. Back in 2015, several subreddits were removed, including ones focused on mocking overweight people, transgender folks, and people of color.

However, other equally offensive subreddits didn't get the axe. Reddit's logic was that the company had received complaints that the now-retired subreddits were harassing others on and offline. Offensive posts are permitted; actual harassment is not.

Huffman previously stated, “On Reddit, the way in which we think about speech is to separate behavior from beliefs.” So posting something horribly racist won’t get flagged unless there’s evidence that users crossed the line from free speech to harassing behavior.

Drawing the line between harassment and controversial conversation is where things get tricky for moderators.

Other social media sites like Facebook, Instagram, and Twitter at least make an attempt, though. So what’s holding Reddit back?

Well, for one, moderating hate speech isn’t a clear cut task.

Right now, AI can’t fully take the reins because to truly put a stop to hate speech, there must be an understanding of both language and intent.

Since current AI isn’t quite there yet, Facebook currently employs actual people for the daunting task. The company mostly relies on overseas contractors, which can get pretty expensive (and can lack understanding of cultural contexts).

Users post millions of comments to Reddit per day, and paying real humans to sift through every potentially offensive or harassing post could break the bank.

Most agree that cost isn’t a relevant excuse, though, so Facebook is looking into buying and developing software specializing in natural language processing as an alternative solution. But right now, Reddit does not seem likely to follow in Facebook’s footsteps.

While Facebook sees itself as a place where users should feel safe and comfortable, Reddit’s stance is that all views are welcome, even potentially offensive and hateful ones.

This April, in an AMA (Ask Me Anything), a user straight-up asked whether obvious racism and slurs are against Reddit's rules.

Huffman responded in part, “the best defense against racism and other repugnant views both on Reddit and in the world, is instead of trying to control what people can and cannot say through rules, is to repudiate these views in a free conversation.”

So essentially, although racism is “not welcome,” it’s also not likely to be banned unless there is associated unacceptable behavior as well.

It's worth noting that while Reddit as a whole does not remove most hate speech, each subreddit has its own set of rules, which may be stricter. The site essentially operates as an online democracy, with each subreddit "state" afforded the autonomy to enforce differing standards.

Enforcement comes down to moderators, and although some content is clearly hateful, other posts fall into a grey area.

Researchers at Berkeley recently partnered with the Anti-Defamation League to create the Online Hate Index project, an AI program that identifies hate speech. While the program was surprisingly accurate in identifying hate speech, determining the intensity of statements was difficult.

Plus, many of the same words are used in hate and non-hate comments. AI and human moderators struggle with defining what crosses the line into hate speech. Not all harmful posts are immediately obvious, and when a forum receives a constant influx of submissions, the volume can be overwhelming for moderators.
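
As a rough illustration of that difficulty (a toy sketch, not any platform's actual system), a keyword-only filter flags a comment that condemns a slur just as readily as one that uses it as an attack:

```python
# A toy illustration of why keyword matching alone misfires: the same word
# appears in an attack and in a comment condemning it, so context and intent
# are lost entirely.
HATE_KEYWORDS = {"<slur>"}   # placeholder token; a real lexicon would hold actual terms

def naive_flag(comment: str) -> bool:
    """Flag a comment if it contains any listed keyword, ignoring context."""
    words = comment.lower().split()
    return any(keyword in words for keyword in HATE_KEYWORDS)

attack  = "all of them are <slur>"
counter = "calling people <slur> is unacceptable and you should be banned"

print(naive_flag(attack), naive_flag(counter))  # True True -- both get flagged
```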

While it’s still worth making any effort to foster healthy online communities, until we get a boost to AI’s language processing abilities, complete hate speech moderation may not be possible for large online groups.


Social Media

Red flags to help you spot a bad social media professional

(SOCIAL MEDIA) Social media is a growing field, with everyone and their mom trying to become a social media manager. Here are a few experts' tips on spotting and avoiding the red flags of bad social media professionals.


Social media professionals, listen up

If you’re thinking about hiring a social media professional – or are one yourself – take some tips from the experts.

We asked a number of entrepreneurs specializing in marketing and social media how they separate the wheat from the chaff when it comes to social media managers, and they gave us some hints about how to spot whose social media game is all bark and no bite.

You can tell a lot from their socials

According to our experts, the first thing you should do if you’re hiring a social media professional is to check out their personal and/or professional social media pages.

Candidates with underwhelming, non-existent, out-of-date, or just plain bad social media pages should obviously get the chop.

“If they have no professional social presence themselves, that’s a big red flag,” says Chelle Honiker, executive director at the Texas Freelance Association. Another entrepreneur, Paul O’Brien of Media Tech Ventures, explains that “the only way to excel is to practice…. If you excel, why would you not be doing so on behalf of your personal brand?”

In other words, if someone can’t make their own social media appealing, how can they be expected to do so for a client?

Other taboos

These pros especially hated seeing outdated icons, infrequent posts, and automatic posts. Worse than outdated social media pages were bad social media pages. Marc Nathan of Miller Egan Molter & Nelson provided a laundry list of negative characteristics that he uses to rule out candidates, including "snarky," "complaining," "unprofessional," "too personal," "inauthentic," and "argumentative."

Besides eliminating candidates with poor social media presence, several of these pros also really hated gimmicky job titles such as “guru,” “whiz,” “ninja,” “superhero,” or “magician.”

They were especially turned off by candidates who called themselves “experts” without any proof of their success.

Jeff Fryer of ARM dislikes pros who call themselves experts because, he says, "The top leaders in this field will be the first to tell you that they're always learning – I know I am!" Steer clear of candidates who talk themselves up with ridiculous titles and who can't provide solid evidence of their expertise.

How do you prove it?

According to our experts, some of them don't even try. To candidates who say "social media can't be measured," Fryer answers, "yes it can[. L]earn how to be a marketer." Beth Carpenter, CEO of Violet Hour Social Marketing, complains that many candidates "Can't talk about ROI (return on investment)," arguing that a good social media pro should be able to show "how social contributes to overall business success." Good social media pros should show their value in both quantitative and qualitative terms.

While our experts wanted to see numerical evidence of social media success, they were also unimpressed with “vanity metrics” such as numbers of followers.

Many pooh-poohed the use of follower counts alone as an indicator of success, with Tinu Abayomi-Paul of Leveraged Promotion joking that "a trained monkey or spambot" can gather 1,000 followers.
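
To see why, here's a back-of-the-envelope sketch (all numbers invented) contrasting raw follower counts with a simple engagement rate:

```python
# A rough sketch of why raw follower counts are a "vanity metric": a smaller,
# engaged audience can outperform a large, passive one. Figures are made up.
def engagement_rate(interactions: int, followers: int) -> float:
    """Interactions (likes, comments, shares, clicks) per follower."""
    return interactions / followers if followers else 0.0

accounts = {
    "big_but_quiet":    {"followers": 100_000, "interactions": 500},
    "small_but_active": {"followers": 2_000,   "interactions": 400},
}

for name, stats in accounts.items():
    rate = engagement_rate(stats["interactions"], stats["followers"])
    print(name, f"{rate:.1%}")
# big_but_quiet 0.5%  vs  small_but_active 20.0%
```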

Claims of expertise or success should also be backed up by references and experience in relevant fields.

Several entrepreneurs said that they had come across social media managers without “any experience in critical fields: marketing, advertising, strategic planning and/or writing,” to quote Nancy Schirm of Austin Visuals. She explains that it’s not enough to know how to “handle the technology.” Real social media experts must cultivate “instinct borne from actual experience in persuasive communication.”

Freshen up

So, if you’re an aspiring social media manager, go clean up those pages, get some references, and figure out solid metrics for demonstrating your success.

And if you’re hiring a social media manager, watch out for these red flags to cull your candidate pool.

#RedFlag
