Social Media

Can you legally monitor your employees’ online activities? Kinda

(SOCIAL MEDIA) Are the ways you monitor your employees online even legal? Did you know there are illegal methods? Yep.

Edward Snowden’s infamous info leak in 2013 brought to light the scope of surveillance measures, raising questions about the legality of monitoring tactics. However, the breach also opened up a broader discussion on best practices for protecting sensitive data.

No company wants to end up with a data breach situation on their hands, but businesses need to be careful when implementing monitoring systems to prevent data loss.

Monitoring your employees’ activity online can be a crucial part of safeguarding proprietary data. However, many legal risks are present when implementing data loss prevention (DLP) methods.

DLP tools like keystroke logging, natural language processing, and network traffic monitoring are all subject to federal and state privacy laws. Before putting any DLP solutions in place, companies need to assess privacy impact and legal risks.
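
To make that more concrete, below is a minimal, hypothetical sketch (in Python) of the kind of pattern-based content scanning a DLP tool might run on an outbound message before it leaves the network. The pattern names and function names are illustrative assumptions, not any vendor’s actual API, and real products rely on far more sophisticated detection.

```python
import re

# Illustrative patterns only -- real DLP products use checksums, context,
# and machine learning rather than bare regular expressions.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
}

def scan_outbound_text(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    email_body = "Hi, my SSN is 123-45-6789 -- please keep it on file."
    hits = scan_outbound_text(email_body)
    if hits:
        print(f"Outbound message flagged for review: {', '.join(hits)}")
```

Even a toy filter like this inspects the content of employee communications, which is exactly why the notification and consent questions below matter.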

First, identify your monitoring needs. Different laws apply to tracking data in transit versus data at rest. Data in transit is any data moving through a network, like sending an email. The Electronic Communications Privacy Act (ECPA) requires consent for tracking any data in transit.

Data at rest is anything relatively immobile, like information stored in a database or archives. Collecting data at rest can fall under the Stored Communications Act (SCA), which typically prohibits unauthorized access or disclosure of electronic communications.

While the SCA does not usually prevent employers from accessing their own systems, monitoring things like Gmail accounts could get messy without proper authorization.

Who you’re tracking also matters when it comes to consent and prior notification. Even if you’re only monitoring your own employees, you may run into disclosure issues. Some states, like Delaware and Connecticut, prohibit employee monitoring without prior notice.

The ECPA also generally prohibits tracking electronic communication, but exceptions are granted for legitimate business purposes so long as consent is obtained.

Monitoring third-party communications can get tricky with wiretapping laws. In California and Illinois, all parties must be notified of any tracking. This can involve disclosures in the email signatures of outbound employee emails, or a broad notification on the company’s site.

Implied consent is established when third parties continue the communication even with those disclaimers present.

If you want to install DLP software on personal devices used for work, such as an employee’s own cellphone, you could face a series of fines for failing to obtain authorization. Incorrect implementation may fall under spyware and computer crime laws.

With any DLP tools and data monitoring, notification and consent are crucial. When planning monitoring, first assess what your privacy needs are, then identify potential risks of implementing any tracking programs.

Define who, where, and why DLP software will apply, and make sure every employee understands the need for tracking. Include consent in employee onboarding, and keep employees updated with changes to your monitoring tactics.

Protecting your company’s data is important, but make sure you’re not unintentionally violating privacy laws with your data loss prevention methods. Regularly review your approach to make sure everything is in compliance with monitoring laws.

Lindsay is an editor for The American Genius with a Communication Studies degree and English minor from Southwestern University. Lindsay is interested in social interactions across and through various media, particularly television, and will gladly hyper-analyze cartoons and comics with anyone, cats included.

Social Media

Twitter’s crackdown on deepfakes could ensure the company’s survival

(SOCIAL MEDIA) Twitter is cracking down on manipulated and misleading content—will other social media platforms do the same?

Twitter isn’t renowned for things that other social media platforms lay claim to—you know, setting trends, turning a profit, staying relevant—but the oft-forgotten site finally has something to brag about: cracking down on deepfakes.

Oh, and they also finally turned a profit this year, but that’s beside the point.

Deepfakes, for those who don’t know, are videos which have been manipulated to portray people—often celebrities or politicians—saying and doing things that they never actually said or did. The problem with deepfakes is that, unlike your average Photoshop job, they are extremely convincing; in some cases, their validity may even be impossible to determine.

Unfortunately, deepfakes have been used for a variety of unsavory purposes ranging from moderate humiliation to full-blown revenge porn; since disproving them is difficult, the long-term implications of this type of video manipulation are pretty terrifying.

You wouldn’t be wrong for thinking that all social media platforms should treat deepfakes as a serious issue, but the fact remains that many platforms have taken decidedly lackadaisical approaches. Facebook, for example, continues to allow content from producers who have histories of video manipulation, the dissemination of misleading information, and flat-out false advertising—something the company has largely glossed over despite heavy media coverage.

This is where Twitter is actually ahead of the curve. Where other social media services have failed in the war against “fake news”, Twitter hopes to succeed by aggressively labelling and, in some cases, censoring media that has been determined to be manipulated or misleading. While the content itself will stay posted in most cases, a warning will appear near it to signify its lack of credibility.

Twitter will also remove manipulated content that is deemed harmful or malicious, but the real beauty of their move is that it allows people to witness first-hand a company or service purposefully misleading them. By keeping the problematic content available while making users aware of its flaws, Twitter is increasing awareness and skepticism about viral content.

Of course, there is room to criticize Twitter’s approach; for example, some will point to their act of leaving deepfakes posted as not doing enough, while others will probably address the tricky business of identifying deepfakes to begin with. Luckily, Twitter’s policy isn’t set in stone just yet—from now until November 27th, you can take a survey to leave feedback on how Twitter should address these issues going forward.

As Twitter’s policy develops and goes into place, it will be interesting to see which social media platforms follow suit.

Social Media

This LinkedIn graphic shows you where your profile is lacking

(SOCIAL MEDIA) LinkedIn has the ability to ensure your visibility, and this new infographic breaks down where you should put the most effort

LinkedIn is a must-have in the professional world. However, this social media platform can be incredibly overwhelming as there are a lot of moving pieces.

Luckily, there is a fancy graphic that details everything you need to know to create the perfect LinkedIn profile. Let’s dive in!

As we know, it is important to use your real name and an appropriate headshot. It’s also a good idea to add a banner photo that fits your personal brand (e.g. one that matches the theme of your profession or industry).

Adding your location and a detailed list of work-related projects are both underutilized, yet key, pieces of information that people will look for. Other key pieces come in the form of recommendations; connections aren’t just about numbers, so endorse your connections and hopefully they will return the favor!

Fill in any and all sections that you can, and re-read for errors (get a second set of eyes if one is available). Use the profile strength meter to get a second opinion on your profile and find out which sections could use a little more help.

There are some settings you can enable to get the most out of LinkedIn. Turn on “career interests” to let recruiters know that you are open to job offers, turn on “career advice” to participate in an advice platform that connects you with other leaders in your field, and switch your profile viewing out of private mode so you can see who is viewing your profile.

The infographic also offers some stats and words to avoid. Let’s start with stats: 65 percent of employers want to see relevant work experience, 91 percent of employers prefer that candidates have work experience, and 68 percent of LinkedIn members use the site to reconnect with past colleagues.

Now, let’s talk vocab; the infographic urges users to avoid the following words: specialized, experienced, skilled, leadership, passionate, expert, motivated, creative, strategic, focused.

That was educational, huh? Speaking of education – be sure to list your highest level of academia. People who list their education appear in searches up to 17 times more often than those who do not. And, much like when you applied to college, your past education wasn’t all that you should have included – certificates (and licenses) and volunteer work help set you apart from the rest.

Don’t be afraid to ask your connections, colleagues, etc. for recommendations. And, don’t be afraid to list your accomplishments.

Finally, users with complete profiles are 40 times more likely to receive opportunities through LinkedIn. You’re already using the site, right? Use it to your advantage! Finish your profile by completing the all-star rating checklist: industry and location, skills (minimum of three), profile photo, at least 50 connections, current position (with description), two past positions, and education.

When all of this is complete, continue using LinkedIn on a daily basis. Update your profile when necessary, share content, and keep your name popping up on people’s timelines. (And, be sure to check out the rest of Leisure Jobs’ super helpful infographic that details other bits, like how to properly size photos!)

Social Media

Deepfakes can destroy any reputation, company, or country

(MEDIA) Deepfakes have been around for a few years now, but they’re being crafted for nefarious purposes beyond the original porn and humor uses.

Deepfakes — a technology originally used by Reddit perverts who wanted to superimpose their favorite actresses’ faces onto the bodies of porn stars — have come a long way since the original Reddit group was banned.

Deepfakes use artificial intelligence (AI) to create bogus videos by analyzing facial expressions to replace one person’s face and/or voice with another’s.

Using computer technology to synthesize videos isn’t exactly new.

Remember in Forrest Gump, how Tom Hanks kept popping up in the background of footage of important historical events, and got a laugh from President Kennedy? It wasn’t created using AI, but the end result is the same. In other cases, such technology has been used to complete a film when an actor dies during production.

The difference between these examples and the latest deepfake technology is a question of ease and access.

Historically, these altered videos have required a lot of money, patience, and skill. But as computer intelligence has advanced, so too has deepfake technology.

Now the computer does the work instead of the human, making it relatively fast and easy to create a deepfake video. In fact, Stanford created a technology using a standard PC and web cam, as I reported in 2016.

Nowadays, your average Joe can access open source deepfake apps for free. All you need is some images or video of your victim.

While the technology has mostly been used for fun – such as superimposing Nicolas Cage into classic films – deepfakes could be, and have been, used for nefarious purposes.

There is growing concern that deepfakes could be used for political disruption, for example, to smear a politician’s reputation or influence elections.

Legislators in the House and Senate have requested that intelligence agencies report on the issue. The Department of Defense has already commissioned researchers to teach computers to detect deepfakes.

One promising technology developed at the University at Albany analyzes blinking to detect deepfakes, as subjects in the faked videos usually do not blink as often as real humans do. Ironically, in order to teach computers how to detect them, researchers must first create many deepfake videos. It seems that deepfake creators and detectors are locked in a sort of technological arms race.
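
As a rough illustration of the blink-rate idea (not the Albany researchers’ actual method), here is a minimal Python sketch that flags a clip whose blink rate falls well below a typical human baseline. It assumes a per-frame “eye openness” signal has already been extracted by some facial-landmark tool, and the threshold values are illustrative assumptions.

```python
from typing import Sequence

# Illustrative assumptions: a typical adult blinks roughly 15-20 times per
# minute, so a far lower rate in a talking-head clip is suspicious.
MIN_EXPECTED_BLINKS_PER_MIN = 10.0
EYE_CLOSED_THRESHOLD = 0.2  # openness values below this count as "closed"

def count_blinks(eye_openness: Sequence[float]) -> int:
    """Count closed-to-open transitions in a per-frame eye-openness signal."""
    blinks = 0
    closed = False
    for value in eye_openness:
        if value < EYE_CLOSED_THRESHOLD:
            closed = True
        elif closed:  # the eye re-opened after being closed: one blink
            blinks += 1
            closed = False
    return blinks

def looks_suspicious(eye_openness: Sequence[float], fps: float) -> bool:
    """Flag a clip whose blink rate is far below the human baseline."""
    minutes = len(eye_openness) / (fps * 60.0)
    if minutes == 0:
        return False
    return count_blinks(eye_openness) / minutes < MIN_EXPECTED_BLINKS_PER_MIN

if __name__ == "__main__":
    # A fake 30-second signal at 30 fps containing only one blink.
    signal = [1.0] * 900
    signal[450:455] = [0.1] * 5
    print(looks_suspicious(signal, fps=30))  # True -- unusually few blinks
```

A real detector works on the raw video and is far more robust, but the same basic intuition (compare a measurable behavior against how real humans behave) drives much of the detection research.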

The falsified videos have the potential to exacerbate the information wars, either by producing false videos or by calling into question real ones. People are already all too eager to believe conspiracy theories and fake news as it is, and a surge of faked videos could be created to back up these bogus theories.

Others worry that the existence of deepfake videos could cast doubt on actual, factual videos. Thomas Rid, a professor of strategic studies at Johns Hopkins University, says that deepfakes could lead to “deep denials” – in other words, “the ability to dispute previously uncontested evidence.”

While there have not yet been any publicly documented cases of attempts to influence politics with deepfake videos, people have already been harmed by the faked videos.

Women have been specifically targeted. Celebrities and civilians alike have reported that their likeness has been used to create fake sex videos.

Deepfakes prove that just because you can achieve an impressive technological feat doesn’t always mean you should.
