Fallout from Facebook’s shady program spying on children

(SOCIAL MEDIA) Facebook is barely even trying to be sneaky anymore, paying children to let it spy on them. Shameless.

Facebook recently landed in hot (boiling) water when it was uncovered that the company had been paying teens about $20 in gift cards each month to install a “research” VPN on their devices, giving the tech giant visibility into all of their cellular and web usage.

Participants were largely recruited through targeted Snapchat and Instagram ads, and the program offered additional incentives for referring friends.

The purpose of this Big Brother program was not to empower young minds with technological innovation, but to use all of this data to track Facebook’s competitors, spot emerging trends, and otherwise be creepin’ on the kids. The program reportedly went so far as to ask users to share screenshots of their Amazon order history pages.

According to the report: “Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy so the social network can decrypt and analyze their phone activity.”

Oh, and if the privacy concerns of this whole program weren’t terrifying enough, it has been going on since 2016.

Almost immediately after the news broke, Apple banned Facebook’s Research VPN and shut down the iOS version of the Research app before Facebook could suspend the program voluntarily. Apple also released a statement condemning the program and Facebook’s shady choice to hide it behind an iOS developer certificate rather than distributing it through the App Store (where apps that collect personal data have been banned since last summer).

This entire debacle highlights the murky borders of online consent when children and teens are involved. Not only are teens less likely to be aware of the risks of sharing their data, but parental “consent” is often not real. There’s no verification: if a teen checks a box in an online form claiming to be their own parent, the website is none the wiser. The same is true for many age verification processes.

If you are a parent reading this and want to make sure your teen isn’t selling their personal data for pennies, Lifehacker has instructions to help you identify whether or not they are in the program (and get them out of it!).

The whole affair is a nice reminder that large tech companies may offer innovative services, high salaries to employees, and strange new ways of keeping in touch with people we’d probably forgotten by now, but the product is not the social networks they build.

The product that Facebook, Google, Amazon, and other giants are really interested in is data – we’ve been reporting that for over a decade now. Their treatment of people who may not even be able to consent to sharing their data highlights this narrow goal. If you are not a person, but rather a collection of market insights, what does your age matter? It’s just another variable for the algorithms (robots).

One upside of this whole mess is that many parents previously unaware of this type of program are now talking to their children about it.

Further, this gives politicians more tangible evidence of why media companies like Facebook should never get a free pass for bad behavior.

AprilJo Murphy is a Staff Writer at The American Genius and holds a PhD in English and Creative Writing from the University of North Texas. She is a writer, editor, and sometimes teacher based in Austin, TX who enjoys getting outdoors with her handsome dog, Roan.

Can you legally monitor your employees’ online activities? Kinda

(SOCIAL MEDIA) Are the ways you are monitoring your employees online even legal? Did you know there are illegal methods? Yep.

Edward Snowden’s infamous 2013 leak brought to light the scope of government surveillance, raising questions about the legality of monitoring tactics. However, the breach also opened up a broader discussion on best practices for protecting sensitive data.

No company wants to end up with a data breach on its hands, but businesses need to be careful when implementing monitoring systems to prevent data loss.

Monitoring your employees’ online activity can be a crucial part of safeguarding proprietary data. However, implementing data loss prevention (DLP) methods carries many legal risks.

DLP tools like keystroke logging, natural language processing, and network traffic monitoring are all subject to federal and state privacy laws. Before putting any DLP solutions in place, companies need to assess privacy impact and legal risks.
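To make the content-inspection side of DLP concrete, here is a minimal sketch in Python: it scans outgoing text for patterns that resemble sensitive data before the message leaves the network. The pattern set and blocking policy below are hypothetical placeholders for illustration, not any vendor’s actual ruleset, and even a scanner this simple counts as monitoring for the consent rules discussed below.

```python
# Toy content-inspection DLP: flag outbound text that appears to contain
# sensitive data. Patterns here are illustrative, not a production ruleset.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # e.g. 123-45-6789
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # loose 13-16 digit match
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in an outgoing message."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

# A real DLP system would hook this into mail or network egress; here we
# just check a sample message and hold it if anything matches.
message = "Hi, the customer's SSN is 123-45-6789, please update the file."
hits = scan_outbound(message)
if hits:
    print(f"Message held for review: matched {hits}")  # -> matched ['ssn']
```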

First, identify your monitoring needs. Different laws apply to tracking data in transit versus data at rest. Data in transit is any data moving through a network, like an email being sent. The Electronic Communications Privacy Act (ECPA) requires consent for tracking any data in transit.

Data at rest is anything relatively immobile, like information stored in a database or archives. Collecting data at rest can fall under the Stored Communications Act (SCA), which typically prohibits unauthorized access or disclosure of electronic communications.

While the SCA does not usually prevent employers from accessing their own systems, monitoring things like employees’ personal Gmail accounts could get messy without proper authorization.

Who you’re tracking also matters when it comes to consent and prior notification. Even if you’re just monitoring your own employees, you may run into disclosure issues. Some states, like Delaware and Connecticut, prohibit employee monitoring without prior notice.

The ECPA also generally prohibits tracking electronic communication, but exceptions are granted for legitimate business purposes so long as consent is obtained.

Monitoring third party communications can get tricky with wiretapping laws. In California and Illinois, all parties must be notified of any tracking. This can involve disclosures in the signatures of outbound employee emails, or a broad notification on the company’s site.

Implied consent arises when third parties continue the communication even with those disclaimers present.

If you want to install DLP software on personal devices used for work, like an employee’s own cellphone, you could face a series of fines for not gaining authorization. Incorrect implementation may fall under spyware and computer crime laws.

With any DLP tools and data monitoring, notification and consent are crucial. When planning monitoring, first assess what your privacy needs are, then identify potential risks of implementing any tracking programs.

Define who, where, and why DLP software will apply, and make sure every employee understands the need for tracking. Include consent in employee onboarding, and keep employees updated with changes to your monitoring tactics.

Protecting your company’s data is important, but make sure you’re not unintentionally bending privacy laws with your data loss prevention methods. Regularly check up on your approaches to make sure everything is in compliance with monitoring laws.

Should social media continue to self-regulate, or should Uncle Sam step in?

(MEDIA) Should social media platforms be allowed to continue regulating themselves, or should governments step in? Is it urgent, or a slippery slope?

Last week, Instagram, WhatsApp, and Facebook suffered a massive worldwide outage that lasted for most of the day. In typical Internet fashion, frustrated users took to Twitter to vent their feelings. A common thread throughout all of the dumpster fire gifs was the implication that these social media platforms were a necessary outlet for connecting people with information, as well as an emotional outlet for whatever users felt they needed to share.

It’s this dual nature of social media, as both a vessel for content that people consume and a product they feed personal data into (for followers, but also knowing that the data is collected and analyzed by the companies), that confuses people as to what these things actually are. Is social media a form of innovative technology, or is it more about the content? Is it media? Is it both?

Well, the answer depends on how you want to approach it.

Although users may say that content is what keeps them using the apps, the companies themselves maintain that the apps are technology. We’ve discussed this distinction before, and how it lets the social media giants skirt more stringent regulation.

But, as many point out, if the technology is dependent on content for its purpose (and the companies’ profit): where does the line between personal information and corporate data mining lie?

Should social media outlets whose platforms are known to perpetuate “fake news” and disinformation be held to higher standards in ensuring that the information they spread is accurate and non-threatening?

As it currently stands, social media companies don’t have any legislative oversight; they operate almost exclusively in a state of self-regulation. This is because they are classified as technology companies rather than media outlets.

This past summer, Senator Mark Warner of Virginia argued in a widely circulated white paper that social media platforms such as Twitter, Facebook, and Instagram needed regulation. Highlighting the Cambridge Analytica scandal, which rocked the polls and underscored the potential of social media to sway real-life policy by way of propaganda, Warner suggested that lawmakers target three areas for regulation: fighting politically oriented misinformation, protecting user privacy, and promoting competition among Internet markets that will make long-term use of the data collected from users.

Warner isn’t the only person who thinks that social media’s unmoored, self-regulated existence is a bit of a problem, but part of the problem comes down to what would be considered user error: the people using social media have forgotten that they, not the apps, are the product.

Technically, many users of social media have signed their privacy away by clicking “accept” on terms and conditions they haven’t fully read.* The difficulty of determining whether or not a meme is Russian propaganda isn’t a glitch in the code; it’s the result of exploiting media illiteracy and confirmation bias.

So, how can you regulate human behavior? Is it on the tech companies to try to be better than the tendencies of the people who use them? Ideally they wouldn’t have to be told not to take advantage of people, but when people are willingly signing up to be taken advantage of, who do you target?

It’s a murky question, and it’s only going to get trickier to solve the more social media embeds itself into our culture.

*Yes, I’m on social media and I blindly clicked it too! He who is without sin, etc.

Deepfakes can destroy any reputation, company, or country

(MEDIA) Deepfakes have been around for a few years now, but they’re being crafted for nefarious purposes beyond the original porn and humor uses.

Deepfakes, a technology originally used by Reddit perverts who wanted to superimpose their favorite actresses’ faces onto the bodies of porn stars, have come a long way since the original Reddit group was banned.

Deepfakes use artificial intelligence (AI) to create bogus videos by analyzing facial expressions to replace one person’s face and/or voice with another’s.

Using computer technology to synthesize videos isn’t exactly new.

Remember in Forrest Gump, how Tom Hanks kept popping up in the background of footage of important historical events, and got a laugh from President Kennedy? It wasn’t created using AI, but the end result is the same. In other cases, such technology has been used to complete a film when an actor dies during production.

The difference between these examples and the latest deepfake technology is a question of ease and access.

Historically, these altered videos have required a lot of money, patience, and skill. But as computer intelligence has advanced, so too has deepfake technology.

Now the computer does the work instead of the human, making it relatively fast and easy to create a deepfake video. In fact, Stanford created such a technology using a standard PC and webcam, as I reported in 2016.

Nowadays, your average Joe can access open source deepfake apps for free. All you need is some images or video of your victim.

While the technology has mostly been used for fun – such as superimposing Nicolas Cage into classic films – deepfakes could and have been used for nefarious purposes.

There is growing concern that deepfakes could be used for political disruption, for example, to smear a politician’s reputation or influence elections.

Legislators in the House and Senate have requested that intelligence agencies report on the issue. The Department of Defense has already commissioned researchers to teach computers to detect deepfakes.

One promising technology developed at the University at Albany analyzes blinking to detect deepfakes, as subjects in the faked videos usually do not blink as often as real humans do. Ironically, in order to teach computers how to detect them, researchers must first create many deepfake videos. It seems that deepfake creators and detectors are locked in a sort of technological arms race.
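The blink test is simple enough to sketch. Below is a toy version in Python: it assumes eye landmarks have already been extracted from each video frame by an upstream detector (dlib’s face landmarks are a common source), and uses the standard eye-aspect-ratio measure of eye openness. The threshold and blink-rate figures are illustrative assumptions, not the Albany team’s actual parameters.

```python
# Toy blink-rate screen for deepfakes: compute the eye aspect ratio (EAR)
# per frame and count blinks; an abnormally low blink rate is one weak
# signal that footage may be synthesized. Assumes an upstream landmark
# detector (e.g. dlib) supplies six (x, y) points per eye per frame.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: shape (6, 2) array of landmarks p1..p6 around one eye.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops toward 0 on a blink."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(frames: list[np.ndarray], fps: float,
                      ear_threshold: float = 0.2, min_run: int = 2) -> float:
    """Count blinks (EAR below threshold for >= min_run consecutive frames)."""
    blinks, run = 0, 0
    for eye in frames:
        if eye_aspect_ratio(eye) < ear_threshold:
            run += 1
        else:
            blinks += run >= min_run
            run = 0
    blinks += run >= min_run
    minutes = len(frames) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# Humans blink roughly 15-20 times a minute at rest; a clip that runs for
# minutes with a near-zero rate is worth a closer look.
```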

The falsified videos have the potential to exacerbate the information wars, either by fabricating events outright or by calling real footage into question. People are already all too eager to believe conspiracy theories and fake news, and faked videos could be created to back up those bogus theories.

Others worry that the existence of deepfake videos could cast doubt on actual, factual videos. Thomas Rid, a professor of strategic studies at Johns Hopkins University, says that deepfakes could lead to “deep denials” – in other words, “the ability to dispute previously uncontested evidence.”

While there have not yet been any publicly documented cases of attempts to influence politics with deepfake videos, people have already been harmed by the faked videos.

Women have been specifically targeted. Celebrities and civilians alike have reported that their likeness has been used to create fake sex videos.

Deepfakes prove that just because you can achieve an impressive technological feat doesn’t always mean you should.
