Social Media

The science behind why people are mean online – is society doomed?

(Social Media News) People are mean online, and it isn’t just trolls; it’s everyone, because our brains are hardwired that way. Is there hope for society?

Our brains make us mean online

We’ve all seen (or joined in) the online pile-ons: bullying celebrities by complaining about how horrible they are, or taking brands to task for their misdeeds. It happens with such frequency and at such high volume that one must pause and ask whether our society has become more abrasive.

Social networks are filled with a level of vitriol we can’t imagine occurring if we had to ‘say it to their face.’ But we don’t have to, and that changes everything. Our brains don’t do empathy well when we aren’t face to face.

Empathy is a big deal. It helps us understand how others feel so we can adapt to their needs. Empathy is what helps you keep your mouth shut when what you want to say would offend someone (and why you don’t mind opening it if that person isn’t present). Empathy is the bedrock of compassion, understanding, rapport building, friendship, and even business relationships.

How our brains are wired

We are hardwired for empathy through something in our brains called mirror neurons. These mirror neurons actually cause our brains to experience the emotions we see on the faces of others. When I smile, your brain lights up as if you are smiling. When I yawn, you yawn. When I am sad, you understand that sadness because your brain experiences it.

This is a very fast process your brain completes subconsciously. Labeled ‘emotional empathy,’ it is rooted in the limbic system of our brain.

None of us like to experience sadness. We avoid it. We want to stop feeling it. These mirror neurons make our brains work for us to ensure that we play nice in the sandbox because when I make you unhappy, I have to feel unhappy. That’s good for me, for you, for the whole human race (literally).

Remove non-verbal cues that cause the mirror neuron magic and you remove the emotional empathy.

So is there any hope for social media?

Good news: there is another way for us to experience empathy. We can think using the executive center of the brain. This is called ‘cognitive empathy.’

The downside is that it’s a far more complicated, time-consuming, and exhausting mental process. It’s like the difference between driving down the highway at 75 mph and traveling on foot: each can get you to your destination, but one is not like the other, and we give up easily when forced to take the more difficult route.

The more tired our brains are (in need of sleep, stressed, etc), the more likely we are to give up the long road of cognitive empathy. But that’s our only option when it comes to online communication. It’s cognitive empathy or bust – and unfortunately, we bust more often than we’d like to admit.

Is society doomed?

So what can we take from this info:

  1. People aren’t becoming more rude. This isn’t a ‘manners issue.’ It’s a brain issue. However, even if our own brains don’t suffer when others are angered, saddened, or hurt by our statements, we still face the consequences: broken relationships, reputational hits, reciprocal barbs.
  2. The more tired you are, the more likely you are to write negative tweets, send nasty emails, and post regrettable comments on Facebook. If it’s negative, adopt a simple policy: sleep on it, and ‘if I still feel this way when I wake up, then I will send it.’

Our brains are wired to get along with others. They just weren’t built for forms of communication that were unfathomable even a decade ago. It’s time to understand what’s happening so that we can adapt to it.

Curt Steinhorst loves attention. More specifically, he loves understanding attention. How it works. Why it matters. How to get it. As someone who personally deals with ADD, he overcame the unique distractions that today’s technology creates to start a Communications Consultancy, The Promentum Group, and Speakers Bureau, Promentum Speakers, both of which he runs today. Curt’s expertise and communication style have led to more than 75 speaking engagements in the last year to organizations such as GM, Raytheon, Naval Academy, Cadillac, and World Presidents’ Organization.

Social Media

Can you legally monitor your employees’ online activities? Kinda

(SOCIAL MEDIA) Are the ways you are monitoring your employees online even legal? Did you know there are illegal methods? Yep.

Edward Snowden’s infamous info leak in 2013 brought to light the scope of surveillance measures, raising questions about the legality of monitoring tactics. The breach also opened up a broader discussion on best practices for protecting sensitive data.

No company wants to end up with a data breach situation on their hands, but businesses need to be careful when implementing monitoring systems to prevent data loss.

Monitoring your employees’ activity online can be a crucial part of safeguarding proprietary data. However, implementing data loss prevention (DLP) methods carries many legal risks.

DLP tools like keystroke logging, natural language processing, and network traffic monitoring are all subject to federal and state privacy laws. Before putting any DLP solutions in place, companies need to assess privacy impact and legal risks.

First, identify your monitoring needs. Different laws apply to tracking data in transit versus data at rest. Data in transit is any data moving through a network, like sending an email. The Electronic Communications Privacy Act (ECPA) requires consent for tracking any data in transit.

Data at rest is anything relatively immobile, like information stored in a database or archives. Collecting data at rest can fall under the Stored Communications Act (SCA), which typically prohibits unauthorized access or disclosure of electronic communications.

While the SCA does not usually prevent employers from accessing their own systems, monitoring things like Gmail accounts could get messy without proper authorization.

Who you’re tracking also matters when it comes to consent and prior notification. Even if you’re only monitoring your own employees, you may run into disclosure issues. Some states, like Delaware and Connecticut, prohibit employee monitoring without prior notice.

The ECPA also generally prohibits tracking electronic communication, but exceptions are granted for legitimate business purposes so long as consent is obtained.

Monitoring third-party communications can get tricky with wiretapping laws. In California and Illinois, all parties must be notified of any tracking. This can involve disclosures in the signatures of outbound employee emails, or a broad notification on the company’s site.

Implied consent is established when third parties continue the communication even with such disclaimers present.

If you want to install DLP software on personal devices used for work, like an employee’s own cellphone, you could face a series of fines for not gaining authorization. Incorrect implementation may run afoul of spyware and computer crime laws.

With any DLP tools and data monitoring, notification and consent are crucial. When planning monitoring, first assess what your privacy needs are, then identify potential risks of implementing any tracking programs.

Define who, where, and why DLP software will apply, and make sure every employee understands the need for tracking. Include consent in employee onboarding, and keep employees updated with changes to your monitoring tactics.
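
To make that concrete, here is a minimal, hypothetical sketch in Python of what recording the who, where, and why of a monitoring policy, plus per-employee consent captured at onboarding, could look like. Every field name, class, and example value here is an illustrative assumption, not any particular DLP product’s configuration.

```python
# Hypothetical sketch of a DLP monitoring policy record: the who, where,
# and why of tracking, plus per-employee consent captured at onboarding.
# All names and values are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class MonitoringPolicy:
    scope: str           # who: e.g. "all employees" or a specific team
    channels: list       # where: e.g. ["company email", "network traffic"]
    purpose: str         # why: the legitimate business purpose
    notice_given: date   # when employees were notified (required in some states)

@dataclass
class ConsentRecord:
    employee_id: str
    policy: MonitoringPolicy
    consented_on: date = field(default_factory=date.today)

policy = MonitoringPolicy(
    scope="all employees",
    channels=["company email", "network traffic"],  # data in transit: consent needed
    purpose="prevent exfiltration of proprietary data",
    notice_given=date(2019, 1, 15),
)

# Captured during onboarding, and refreshed whenever the policy changes.
consent = ConsentRecord(employee_id="E-1042", policy=policy)
print(consent)
```

The point is simply that scope, channels, purpose, and dated notice and consent are concrete artifacts you can audit later, whatever system actually records them.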

Protecting your company’s data is important, but make sure you’re not unintentionally bending privacy laws with your data loss prevention methods. Regularly check up on your approaches to make sure everything is in compliance with monitoring laws.

Social Media

Should social media continue to self-regulate, or should Uncle Sam step in?

(MEDIA) Should social media platforms be allowed to continue regulating themselves, or should governments step in? Is it urgent, or a slippery slope?

Last week, Instagram, WhatsApp, and Facebook suffered a massive outage around the world that lasted for most of the day. In typical Internet fashion, frustrated users took to Twitter to vent their feelings. A common thread throughout all of the dumpster fire gifs was the implication that these social media platforms were a necessary outlet for connecting people with information—as well as being an emotional outlet for whatever they felt like they needed to share.

It’s this dual nature of social media that confuses people as to what these things actually are: it is both a vessel for content that people consume and a product they share personal data with (for followers, but also in the knowledge that the companies collect and analyze that data). Is social media a form of innovative technology? Is it more about the content, making it media? Is it both?

Well, the answer depends on how you want to approach it.

Although users may say that content is what keeps them using the apps, the companies themselves maintain that the apps are technology. We’ve discussed this distinction before, and how it lets the social media giants skirt more stringent regulation.

But, as many point out, if the technology is dependent on content for its purpose (and the companies’ profit): where does the line between personal information and corporate data mining lie?

Should social media outlets known for their platform being used to perpetuate “fake news” and disinformation be held to higher standards in ensuring that the information they spread is accurate and non-threatening?

As it currently stands, social media companies don’t have any legislative oversight—they operate almost exclusively in a state of self-regulation.  This is because they are classified as technology companies rather than media outlets.

This past summer, Senator Mark Warner of Virginia argued in a widely circulated white paper that social media platforms such as Twitter, Facebook, and Instagram need regulation. Highlighting the Cambridge Analytica scandal, which rocked the polls and underscored the potential of social media to sway real-life policy by way of propaganda, Warner suggested that lawmakers target three areas for regulation: fighting politically oriented misinformation, protecting user privacy, and promoting competition among Internet markets that will make long-term use of the data collected from users.

Warner isn’t the only person who thinks that social media’s unmoored, self-regulated existence is a bit of a problem, but part of the problem comes down to what would be considered user error: the people using social media have forgotten that they, not the apps, are the product.

Technically, many users of social media have signed their privacy away by clicking “accept” on terms and conditions they haven’t fully read.* And the difficulty of determining whether or not a meme is Russian propaganda isn’t a glitch in the code; it’s an exploitation of media illiteracy and confirmation bias.

So, how can you regulate human behavior? Is it on the tech companies to rise above the tendencies of the people who use them? Ideally they wouldn’t have to be told not to take advantage of people, but when people willingly sign up to be taken advantage of, who do you target?

It’s a murky question, and it’s only going to get trickier to solve the more social media embeds itself into our culture.

*Yes, I’m on social media and I blindly clicked it too! He who is without sin, etc.

Social Media

Deepfakes can destroy any reputation, company, or country

(MEDIA) Deepfakes have been around for a few years now, but they’re being crafted for nefarious purposes beyond the original porn and humor uses.

Deepfakes — a technology originally used by Reddit perverts who wanted to superimpose their favorite actresses’ faces onto the bodies of porn stars — have come a long way since the original Reddit group was banned.

Deepfakes use artificial intelligence (AI) to create bogus videos by analyzing facial expressions to replace one person’s face and/or voice with another’s.
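At the core of most of these tools is a simple idea: train one shared encoder on faces of both people so it learns pose and expression, plus one decoder per identity, then decode person A’s expression with person B’s decoder. Below is a minimal sketch of that shared-encoder, dual-decoder autoencoder in Python with PyTorch; the layer sizes, names, and training details are illustrative assumptions, not any specific tool’s code.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder autoencoder
# behind early deepfake tools. Layer sizes, names, and training details
# are illustrative assumptions, not any specific tool's implementation.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 aligned face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder learns pose/expression; two decoders learn identities.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training (sketched): reconstruct each person's faces through their own decoder.
faces_a = torch.rand(4, 3, 64, 64)  # stand-in for aligned crops of person A
loss_a = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The swap: encode person A's expression, decode with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```

Broadly speaking, because the shared latent code captures pose and expression rather than identity, decoder B repaints A’s expression with B’s face; the rest of a working tool is face alignment, blending, and per-frame smoothing.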

Using computer technology to synthesize videos isn’t exactly new.

Remember in Forrest Gump, how Tom Hanks kept popping up in the background of footage of important historical events, and got a laugh from President Kennedy? It wasn’t created using AI, but the end result is the same. In other cases, such technology has been used to complete a film when an actor dies during production.

The difference between these examples and the latest deepfake technology is a question of ease and access.

Historically, these altered videos have required a lot of money, patience, and skill. But as computer intelligence has advanced, so too has deepfake technology.

Now the computer does the work instead of the human, making it relatively fast and easy to create a deepfake video. In fact, Stanford created such a technology using a standard PC and webcam, as I reported in 2016.

Nowadays, your average Joe can access open source deepfake apps for free. All you need is some images or video of your victim.

While the technology has mostly been used for fun – such as superimposing Nicolas Cage into classic films – deepfakes could be, and have been, used for nefarious purposes.

There is growing concern that deepfakes could be used for political disruption, for example, to smear a politician’s reputation or influence elections.

Legislators in the House and Senate have requested that intelligence agencies report on the issue. The Department of Defense has already commissioned researchers to teach computers to detect deepfakes.

One promising technology developed at the University at Albany analyzes blinking to detect deepfakes, as subjects in the faked videos usually do not blink as often as real humans do. Ironically, in order to teach computers how to detect them, researchers must first create many deepfake videos. It seems that deepfake creators and detectors are locked in a sort of technological arms race.
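
As a rough illustration of how such a blink check could work, here is a minimal Python sketch that counts blinks using the eye aspect ratio (EAR) computed from dlib’s 68-point facial landmarks. The threshold, file names, and overall approach are assumptions for illustration, not the Albany team’s actual method.

```python
# Sketch of a blink-rate heuristic: count blinks via the eye aspect ratio
# (EAR) of facial landmarks. Assumes dlib's 68-point predictor model file
# is available locally; threshold and file names are illustrative.
import cv2
import dlib
from scipy.spatial import distance

def eye_aspect_ratio(eye):
    """EAR: eye height over width; it drops sharply when the eye closes."""
    a = distance.euclidean(eye[1], eye[5])
    b = distance.euclidean(eye[2], eye[4])
    c = distance.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed local file

EAR_THRESHOLD = 0.2  # assumed: below this, the eye is treated as closed
blinks, closed, frames = 0, False, 0

cap = cv2.VideoCapture("suspect_video.mp4")  # hypothetical input clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        pts = predictor(gray, face)
        # Landmarks 36-41 outline the left eye, 42-47 the right eye.
        left = [(pts.part(i).x, pts.part(i).y) for i in range(36, 42)]
        right = [(pts.part(i).x, pts.part(i).y) for i in range(42, 48)]
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        if ear < EAR_THRESHOLD and not closed:
            blinks, closed = blinks + 1, True   # eye just closed: one blink
        elif ear >= EAR_THRESHOLD:
            closed = False
cap.release()

# Real people blink roughly every 2-10 seconds; far fewer is a red flag.
print(f"{blinks} blinks over {frames} frames")
```

The published detection work is considerably more sophisticated; the point of the sketch is only that an abnormally low blink count over many frames is a cheap, measurable red flag.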

The falsified videos have the potential to exacerbate the information wars, either by producing false videos or by calling real ones into question. People are already all too eager to believe conspiracy theories and fake news, and an influx of faked videos could be created to back up these bogus theories.

Others worry that the existence of deepfake videos could cast doubt on actual, factual videos. Thomas Rid, a professor of strategic studies at Johns Hopkins University, says that deepfakes could lead to “deep denials” – in other words, “the ability to dispute previously uncontested evidence.”

While there have not yet been any publicly documented cases of attempts to influence politics with deepfake videos, people have already been harmed by the faked videos.

Women have been specifically targeted. Celebrities and civilians alike have reported that their likeness has been used to create fake sex videos.

Deepfakes prove that just because you can achieve an impressive technological feat doesn’t always mean you should.
