

Study ranks social networks for their impact on youths’ mental health

(SOCIAL MEDIA) A recent study shows the impact that various social media platforms have on adolescents’ and kids’ mental health. Not surprisingly, there are a lot of negatives.


Social media expectations

It’s no secret that social media puts a lot of pressure on everyone who uses it – be beautiful, be exciting, be perfect.

And a new report by the UK-based Royal Society for Public Health found that of all the social media apps out there, Instagram does the most damage to young people’s mental health.

Status of Mind

The study, dubbed #StatusofMind, surveyed a group of nearly 1,500 people between the ages of 14 and 24, aiming to shed some light on how popular social platforms affect things like anxiety, depression, body image, and self-identity.

Major platforms like Instagram, Snapchat, Facebook, and Twitter all turned out to have an overall negative effect on mental health in that demographic, but Instagram was the worst – especially for young women.

The uber-visual app encourages women to “compare themselves against unrealistic, largely curated, filtered and Photoshopped versions of reality,” according to report author Matt Keracher.

An anonymous female survey respondent drove the point home: “Instagram easily makes girls and women feel as if their bodies aren’t good enough as people add filters and edit their pictures in order for them to look ‘perfect.’”

The dilemma

The word ‘unattainable’ is thrown around a lot – as in, the standards on Instagram are unattainable. But it doesn’t feel like it when you see plenty of users apparently achieving that ‘unattainable’ perfection. When you’ve been scrolling through enviable pics for a while, it’s easy to forget how curated and filtered they are.

Instead, it starts to feel like both a personal connection to someone’s life, and an alienating, impossible standard.

To combat this creeping acceptance of Instagram as reality, the Royal Society for Public Health calls for warnings or labels on all posts, across platforms, that have been manipulated digitally, whether it’s Photoshop or a simple filter.

“We’re not asking these platforms to ban Photoshop or filters but rather to let people know when images have been altered so that users don’t take the images on face value as real,” said Keracher. With a reminder on most every post, though, no matter what the label says or looks like, it could soon start to blend into the background.

The report also suggested that pop-up warnings should appear when users have been on a platform for more than two hours.

Respondents who spent more than two hours a day on social media sites were more likely to report poor mental health.

“Platforms that are supposed to help young people connect with each other may actually be fueling a mental health crisis,” said Shirley Cramer, chief executive of the Royal Society for Public Health, in the report.

Making strides

While 70 percent of the surveyed young people are in favor of the usage warning, it probably isn’t so simple. Social media is addictive, just like cigarettes and alcohol. Your brain craves the validation of likes, the feeling of being included. A pop-up could easily be ignored, ineffective.

There may not be a perfect answer yet, but the intentions of the study are good.

“We really want to equip young people with the tools and the knowledge to be able to navigate social media platforms not only in a positive way but in a way that promotes good mental health,” added Keracher.

Here to stay

One thing is certain: social media isn’t going anywhere. Though there are demonstrated negative effects on mental health, there are also plenty of benefits to using social media – many use the apps as outlets for self-expression and for forging connections with new people.

Professional YouTuber (what a job!) Laci Green is a strong proponent of mental health education.

“Because platforms like Instagram and Facebook present highly curated versions of the people we know and the world around us, it is easy for our perspective of reality to become distorted,” said Green. “Socializing from behind a screen can also be uniquely isolating, obscuring mental health challenges even more than usual.”

Notably, YouTube was the only platform in the study that was found to positively impact young people’s mental health. It’s harder to filter and curate a whole video than a split-second snapshot.

Equip kids, don’t just shelter them

Ultimately, education is a promising route towards promoting good mental health, says the UK’s Royal College of Psychiatrists president, Sir Simon Wessely. “I am sure that social media plays a role in unhappiness, but it has as many benefits as it does negatives,” said Wessely.

“We need to teach children how to cope with all aspects of social media — good and bad — to prepare them for an increasingly digitized world. There is real danger in blaming the medium for the message.”


Staff Writer, Natalie Bradford earned her B.A. in English from Cornell University and spends a lot of time convincing herself not to bake MORE brownies. She enjoys cats, cocktails, and good films - preferably together. She is currently working on a collection of short stories.


Can you legally monitor your employees’ online activities? Kinda

(SOCIAL MEDIA) Are the ways you are monitoring your employees online even legal? Did you know there are illegal methods? Yep.


Edward Snowden’s infamous info leak in 2013 brought to light the scope of surveillance measures, raising questions about the legality of monitoring tactics. However, the breach also opened up a broader discussion on best practices for protecting sensitive data.

No company wants to end up with a data breach situation on their hands, but businesses need to be careful when implementing monitoring systems to prevent data loss.

Monitoring your employees’ activity online can be a crucial part of safeguarding proprietary data. However, implementing data loss prevention (DLP) methods carries a number of legal risks.

DLP tools like keystroke logging, natural language processing, and network traffic monitoring are all subject to federal and state privacy laws. Before putting any DLP solutions in place, companies need to assess privacy impact and legal risks.

First, identify your monitoring needs. Different laws apply to tracking data in transit versus data at rest. Data in transit is any data moving through a network, like sending an email. The Electronic Communications Privacy Act (ECPA) requires consent for tracking any data in transit.

Data at rest is anything relatively immobile, like information stored in a database or archives. Collecting data at rest can fall under the Stored Communications Act (SCA), which typically prohibits unauthorized access or disclosure of electronic communications.

While the SCA does not usually prevent employers from accessing their own systems, monitoring things like Gmail accounts could get messy without proper authorization.

Who you’re tracking also matters when it comes to consent and prior notification. Even if you’re only monitoring your own employees, you may run into disclosure issues: some states, like Delaware and Connecticut, prohibit employee monitoring without prior notice.

The ECPA also generally prohibits tracking electronic communication, but exceptions are granted for legitimate business purposes so long as consent is obtained.

Monitoring third party communications can get tricky with wiretapping laws. In California and Illinois, all parties must be notified of any tracking. This can involve disclosures on email signatures from outbound employee emails, or a broad notification on the company’s site.

Implied consent is established when third parties continue the communication even after those disclaimers are presented.

If you want to install DLP software on personal devices used for work, such as an employee’s cellphone, you could face a series of fines for not obtaining authorization. Incorrect implementation may fall under spyware and computer crime laws.

With any DLP tools and data monitoring, notification and consent are crucial. When planning monitoring, first assess what your privacy needs are, then identify potential risks of implementing any tracking programs.

Define who, where, and why DLP software will apply, and make sure every employee understands the need for tracking. Include consent in employee onboarding, and keep employees updated with changes to your monitoring tactics.

Protecting your company’s data is important, but make sure you’re not unintentionally bending privacy laws with your data loss prevention methods. Regularly check up on your approaches to make sure everything is in compliance with monitoring laws.



Should social media continue to self-regulate, or should Uncle Sam step in?

(MEDIA) Should social media platforms be allowed to continue regulating themselves, or should governments step in? Is regulation urgent, or a slippery slope?


Last week, Instagram, WhatsApp, and Facebook suffered a massive outage around the world that lasted for most of the day. In typical Internet fashion, frustrated users took to Twitter to vent their feelings. A common thread throughout all of the dumpster fire GIFs was the implication that these social media platforms were a necessary outlet for connecting people with information—as well as being an emotional outlet for whatever they felt like they needed to share.

It’s this dual nature of social media – both a vessel for content that people consume and a product they share personal data with (for followers, but also knowing that the data is collected and analyzed by the companies) – that confuses people as to what these things actually are. Is social media a form of innovative technology, or is it more about the content – is it media? Is it both?

Well, the answer depends on how you want to approach it.

Although users may say that content is what keeps them using the apps, the companies themselves purport that the apps are technology. We’ve discussed this distinction before, and how it means that the social media giants get to skirt around having more stringent regulation. 

But, as many point out, if the technology is dependent on content for its purpose (and the companies’ profit): where does the line between personal information and corporate data mining lie?

Should social media outlets known for their platform being used to perpetuate “fake news” and disinformation be held to higher standards in ensuring that the information they spread is accurate and non-threatening?

As it currently stands, social media companies don’t have any legislative oversight—they operate almost exclusively in a state of self-regulation.  This is because they are classified as technology companies rather than media outlets.

This past summer, Senator Mark Warner of Virginia argued in a widely circulated white paper that social media platforms such as Twitter, Facebook, and Instagram need regulation. Highlighting the Cambridge Analytica scandal, which rocked the polls and underscored the potential of social media to sway real-life policy by way of propaganda, Warner suggested that lawmakers target three areas for regulation: fighting politically oriented misinformation, protecting user privacy, and promoting competition among Internet markets that will make long-term use of the data collected from users.

Warner isn’t the only person who thinks that social media’s current unmoored, self-regulated existence is a bit of a problem, but part of the problem comes from what would be considered user error: the people using social media have forgotten that they are the product, not the apps.

Technically, many users of social media have signed their privacy away by clicking “accept” on terms and conditions they haven’t fully read.* The issue of determining whether or not a meme is Russian propaganda isn’t a glitch in code; it’s an exploitation of media illiteracy and confirmation bias.

So, how can you regulate human behavior? Is it on the tech companies to try and be better than the tendencies of the people who use them? Ideally they wouldn’t have to be told not to take advantage of people, but when people are willingly signing up to be taken advantage of, who do you target?

It’s a murky question, and it’s only going to get trickier to solve the more social media embeds itself into our culture.

*Yes, I’m on social media and I blindly clicked it too! He who is without sin, etc.



Deepfakes can destroy any reputation, company, or country

(MEDIA) Deepfakes have been around for a few years now, but they’re being crafted for nefarious purposes beyond the original porn and humor uses.


Deepfakes – a technology originally used by Reddit perverts who wanted to superimpose their favorite actresses’ faces onto the bodies of porn stars – have come a long way since the original Reddit group was banned.

Deepfakes use artificial intelligence (AI) to create bogus videos by analyzing facial expressions to replace one person’s face and/or voice with another’s.

Using computer technology to synthesize videos isn’t exactly new.

Remember in Forrest Gump, how Tom Hanks kept popping up in the background of footage of important historical events, and got a laugh from President Kennedy? It wasn’t created using AI, but the end result is the same. In other cases, such technology has been used to complete a film when an actor dies during production.

The difference between these examples and the latest deepfake technology is a question of ease and access.

Historically, these altered videos have required a lot of money, patience, and skill. But as computer intelligence has advanced, so too has deepfake technology.

Now the computer does the work instead of the human, making it relatively fast and easy to create a deepfake video. In fact, Stanford created a technology using a standard PC and webcam, as I reported in 2016.

Nowadays, your average Joe can access open source deepfake apps for free. All you need is some images or video of your victim.

While the technology has mostly been used for fun – such as superimposing Nicolas Cage into classic films – deepfakes could be, and have been, used for nefarious purposes.

There is growing concern that deepfakes could be used for political disruption, for example, to smear a politician’s reputation or influence elections.

Legislators in the House and Senate have requested that intelligence agencies report on the issue. The Department of Defense has already commissioned researchers to teach computers to detect deepfakes.

One promising technology developed at the University at Albany analyzes blinking to detect deepfakes, as subjects in faked videos usually do not blink as often as real humans do. Ironically, in order to teach computers how to detect them, researchers must first create many deepfake videos. It seems that deepfake creators and detectors are locked in a sort of technological arms race.
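
To make the blink-rate idea concrete, here is a minimal, hypothetical sketch in Python – not the Albany team’s actual detector – using the common “eye aspect ratio” (EAR) heuristic. It assumes the opencv-python, dlib, and scipy packages are installed along with dlib’s standard 68-point landmark model file; the video filename is a placeholder.

```python
# Illustrative sketch only: estimate how often a subject blinks in a clip
# using the eye aspect ratio (EAR) heuristic. A suspiciously low blink rate
# is one possible red flag for a synthesized face, not proof of a fake.
import cv2
import dlib
from scipy.spatial import distance as dist

LEFT_EYE = slice(42, 48)   # 68-point model: left-eye landmark indices
RIGHT_EYE = slice(36, 42)  # 68-point model: right-eye landmark indices
EAR_THRESHOLD = 0.21       # below this, treat the eye as closed

def eye_aspect_ratio(eye):
    # Ratio of vertical eye openings to horizontal width; drops when the lid closes.
    a = dist.euclidean(eye[1], eye[5])
    b = dist.euclidean(eye[2], eye[4])
    c = dist.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

def blinks_per_minute(video_path, predictor_path="shape_predictor_68_face_landmarks.dat"):
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor(predictor_path)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, eye_closed, frames = 0, False, 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray, 0)
        if not faces:
            continue
        shape = predictor(gray, faces[0])
        points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        ear = (eye_aspect_ratio(points[LEFT_EYE]) +
               eye_aspect_ratio(points[RIGHT_EYE])) / 2.0

        # Count a blink on the open-to-closed transition.
        if ear < EAR_THRESHOLD and not eye_closed:
            blinks += 1
            eye_closed = True
        elif ear >= EAR_THRESHOLD:
            eye_closed = False

    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

if __name__ == "__main__":
    rate = blinks_per_minute("suspect_clip.mp4")  # hypothetical file name
    print(f"Estimated blink rate: {rate:.1f} blinks/minute")
    # People at rest typically blink roughly 15-20 times per minute; a rate
    # far below that is a reason to look closer, not a verdict on its own.
```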

The falsified videos have the potential to exacerbate the information wars, either by producing convincing fakes or by calling real footage into question. People are already all too eager to believe conspiracy theories and fake news as it is, and faked videos could be created to back up those bogus theories.

Others worry that the existence of deepfake videos could cast doubt on actual, factual videos. Thomas Rid, a professor of strategic studies at Johns Hopkins University, says that deepfakes could lead to “deep denials” – in other words, “the ability to dispute previously uncontested evidence.”

While there have not yet been any publicly documented cases of attempts to influence politics with deepfake videos, people have already been harmed by the faked videos.

Women have been specifically targeted. Celebrities and civilians alike have reported that their likeness has been used to create fake sex videos.

Deepfakes prove that just because you can achieve an impressive technological feat doesn’t always mean you should.
