Social Media

Facebook private status updates made public by Storify

Private status updates on Facebook, whether posted by private users or in secret and private groups, are never truly private, especially with the help of Storify.

Private social media updates made public through Storify

Julie Pippert, Founder and Director of Artful Media Group and a noted speaker and communications expert, shared with AGBeat how she discovered what she believes to be a flaw in the popular service Storify: it makes selected private Facebook status updates, from personal profiles and from private and secret groups, visible to anyone and completely public.

Storify is a free content curation tool wherein users can pull social elements like photos, videos, and status updates from social networks, combining them into one single embeddable widget that is perfect for bloggers and digital publishers, telling the story of an event in its entirety through social reactions. It’s a clever and popular service that brags, “streams flow, but stories last.”

Unfortunately, that has proven true even of private Facebook status updates, no matter a user’s privacy settings. Using the Storify app to grab an update immediately pulls not only the text of the status update, but also the user’s profile picture (which links back to their account), the timestamp of the original update on Facebook, and the original link.

Below is what a user with the Storify Google Chrome Extension sees on an update I posted in a Secret Facebook Group (note the word “Storify” which is the mechanism that immediately pulls all of the aforementioned data into Storify):

[Screenshot: the “Storify” link that appears on a status update in a Secret Facebook Group]

When published in Storify, it appears like so (embedded using the Storify code provided by the service):

This is an example taken from a Secret Facebook Group made up of a handful of very close friends, where we talk about sensitive health issues each of us has, which would obviously be detrimental for the public to see.

Now, if you are not a member of the secret group, you cannot see anything else inside the group or who its members are, and you gain no additional access to other status updates. But my face and name are now publicly associated with a sensitive topic, simply because another group member innocently pulled the update as they saw it in their timeline, not realizing it came from the group, or not thinking Storify would authorize such a move.

Storify users can only pull status updates from people they are connected with socially, but those users’ privacy settings do not matter, and they can pull in status updates from private groups to which they belong. While none of this offers a window into those users’ accounts or into the secret groups themselves, the Storify tool can turn private Facebook updates public, even if only one at a time.
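Storify has never published its internals, but the pieces it pulls (the quote, the avatar, the timestamp, the permalink) map onto the fields any Graph-API-style post object exposes to an app a group member has authorized. A minimal, hypothetical sketch of how such a curation embed gets assembled; all field names are illustrative, not Storify’s actual code:

```python
# Hypothetical sketch: how a curation tool might turn an API post
# object into a public embed. Field names are illustrative only.

def build_embed(post):
    """Assemble the public-facing embed data from a post object."""
    return {
        "quote": post["message"],               # the status text
        "author": post["from"]["name"],
        "avatar": post["from"]["picture_url"],  # links back to the account
        "timestamp": post["created_time"],      # original post time
        "permalink": post["link"],              # original Facebook link
    }

# A post pulled from a secret group looks no different to the tool
# than a public one -- nothing in the embed step checks privacy.
post = {
    "message": "Talking about a sensitive health issue...",
    "from": {"name": "Jane Doe", "picture_url": "https://example.com/jane.jpg"},
    "created_time": "2013-01-17T20:15:00Z",
    "link": "https://facebook.com/groups/secret/posts/123",
    "privacy": "SECRET",                        # present, but ignored
}

embed = build_embed(post)
```

The point of the sketch is the omission: the audience of the original post never enters the embed-building step, so a secret-group update and a public one produce identical widgets.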

The discovery of the ability to bypass privacy settings

Pippert discovered this bypass through what she calls a “faux pas accident”: using the Storify app, she shared an update from a friend who felt her privacy settings were as private as they could possibly get. Both women were surprised at how easily a private account could become public, even if only one status update at a time.

“I felt so terrible about what happened that I started digging and checking,” Pippert said, “and I figured out that although anything can be copied, screen captured or otherwise shared, anyone who installs the Storify app can do it with one click, even if it is marked or otherwise set to be private.”

Pippert explains that she shared a friend’s very heartwarming update about Superstorm Sandy, but when she notified her friend, both were alarmed that it could be used publicly. No matter the content, her friend did not want her name used publicly, which is often the case for executives or government employees whose contracts forbid them from commenting to the press or otherwise speaking publicly.

No notification, reminder, or restriction

Neither Storify nor Facebook offered any notification that the content was in any way restricted or private, and there is no way for users to opt out of having their content shared on Storify, even if opting out is implied by the ultra-private settings on their Facebook account.

“I really like Storify and it is so useful, especially with the Chrome app, for capturing content for my job and topics that matter in my work. It’s incredibly efficient,” said Pippert. But she notes that “End of day, you just have to be prepared to have some of your content used beyond your little sphere. But the people using it have a responsibility too. What that is isn’t exactly clear in every case. We do all have to be responsible with content we put out through social media, even privately. My friend put out great content that reflected well on her. But she didn’t want her name out there publicly.”

“Storify enabled me to nearly bypass that, against her wishes,” Pippert said. “After we talked, I offered to remove her quote.”

What about private accounts on Twitter?

When a Storify app user clicks “Storify” next to a public Twitter user’s update as a means of adding that update to their Storify stream, the following appears:

[Screenshot: the Storify prompt shown on a public Twitter user’s update]

And when a user attempts to Storify a private user’s update, Storify doesn’t offer any explanation or notice that this cannot be done on a private account; rather, it turns the screen black, like so:

[Screenshot: the blacked-out screen shown when attempting to Storify a private Twitter user’s update]

Secret Facebook Group updates no longer secret

We noticed major differences between how Storify handles private Twitter updates and private Facebook updates: users can read Facebook status updates in a Storify stream that would otherwise be private.

If your company has a Secret Facebook Group where you collaborate, your prayer group has a Private Facebook Group where you share personal intentions, or your friends have a Secret Facebook Group to talk about their abusive husbands, all of that is private within Facebook, but Storify grabs the information, and it becomes a Storify update with all of the attached data.

Take note that the embedded status update above has actually been deleted from Facebook, yet you can still see it on Storify. That is troubling. Here is a screenshot in the event someone at either company tweaks something and it disappears.

It’s time to look at the connection between Storify and Facebook

While there is likely no malice on the part of Storify here, or even Facebook in how it structures data differently than Twitter, inadvertently sharing private information is all too easy with Storify and Facebook, the latter famous for keeping data on its servers even after users delete photos and the like. It is not in Facebook’s interest to get rid of any data points, as its bread and butter is ad dollars based on aggregated data; nor is it in Storify’s interest, as those data points paint an accurate, unfiltered picture of a user’s status update.

Pippert concludes, “It might ultimately be a human problem to solve: capture content from others mindfully and use it thoughtfully, with good communication. Let others know you’re using the content and make sure you are clear to friends your preference about your content being redistributed.”

This is yet another reminder that anything you say anywhere on the web, private or not, is always subject to being shared via third-party apps, screenshots, or good old-fashioned copy and paste. Never say something online that you wouldn’t say in public, because there really is no such thing as privacy; that is sad and unacceptable, but true.

Regardless of human behavior, the connection between Twitter and Storify proves there are ways to actually protect private information, so it is clearly time to examine the connection between Facebook and Storify.
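Twitter’s behavior shows what respecting privacy looks like on the curation side: the tool simply refuses when an account is protected. If Facebook exposed a reliable, machine-readable audience field on each post, the equivalent gate would be a one-line check. A hedged sketch, assuming a hypothetical `privacy` field; this is not Storify’s or Facebook’s actual API:

```python
# Hypothetical sketch of a privacy-respecting curation gate.
# Assumes the platform exposes a machine-readable audience field
# on each post object; the field name and values are illustrative.

PUBLIC_AUDIENCES = {"EVERYONE", "PUBLIC"}

def can_curate(post):
    """Allow curation only for posts whose audience is public."""
    return post.get("privacy", "PRIVATE").upper() in PUBLIC_AUDIENCES

def curate(post):
    if not can_curate(post):
        # Mirror Twitter's behavior: refuse, rather than silently publish.
        raise PermissionError("This update is not public and cannot be curated.")
    return {"quote": post["message"], "link": post["link"]}

public_post = {"message": "Hello world", "link": "https://example.com/1",
               "privacy": "EVERYONE"}
secret_post = {"message": "Group-only update", "link": "https://example.com/2",
               "privacy": "SECRET"}
```

Note the defensive default: a post with no audience field is treated as private, so a missing flag fails closed rather than open.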

More reading:

Storify Co-Founder implies nothing on Facebook is private

Lani is the Chief Operating Officer at The American Genius and has been named in the Inman 100 Most Influential Real Estate Leaders several times, co-authored a book, co-founded BASHH and Austin Digital Jobs, and is a seasoned business writer and editorialist with a penchant for the irreverent.

19 Comments

  1. Scott Baradell

    January 18, 2013 at 9:23 am

    Excellent, Lani and Julie!

  2. AmyVernon

    January 18, 2013 at 9:26 am

    So glad you wrote about this and Julie tested it out. It once again shows that nothing you write online is truly private. As Julie rightfully pointed out, anyone could screenshot or otherwise share a post at any time, but it takes extra effort and would have to be done purposefully. But with the way the newsfeed is set up, you could easily Storify something that shows up in your newsfeed, not even realizing it’s not public.

    I don’t blame Storify for this – they’re using the API Facebook gives them. Facebook needs to shore this up.

    • Erika Napoletano

      January 18, 2013 at 9:39 am

Hear, hear, Amy. Another Facebook privacy issue — when will these be a thing of the past?

    • Julie Pippert

      January 18, 2013 at 4:34 pm

Yeah, Facebook needs to recognize we’re going to want to use third party apps. I don’t want Storify blocked; I do want better collaboration that lets it stay in line with FB settings.

      That’s exactly what happened — I easily Storified something from the newsfeed, not knowing it was not public.

      I learned my lesson and try to be cautious, and I still use and am a fan of Storify. I just want my confidence back in respecting privacy settings.

  3. Burt Herman

    January 18, 2013 at 11:50 am

    Thanks for the post and I very much agree with your conclusion — anything posted online in a way that others can see it could be copied, so you should think carefully what you write online. (Or even in an email, for that matter, that could also be easily copied).

    This isn’t a technology issue as much as an etiquette issue. Now that everyone has the power to easily publish to the whole world, we all need to think about how to use that power.

    • Danny Brown

      January 18, 2013 at 12:01 pm

Surely the etiquette should be for technology APIs to respect privacy settings and be unable to let users post private group updates, no?

      • Burt Herman

        January 18, 2013 at 2:51 pm

        It’s up to you to decide what to share online, and whether to trust the people who can see what you share.

        • Danny Brown

          January 18, 2013 at 3:56 pm

Right. And when it’s up to me, I choose to be part of a Facebook Group that’s private. So, it should now be up to any technology scraping feeds to recognize and respect private settings. Maybe something for you guys and Facebook to work out…

          • Burt Herman

            January 18, 2013 at 4:20 pm

            We don’t show anything to people who can’t see it already on Facebook. Only other people in that group can see it, so it’s up to you whether you trust them not to share what you post more widely.

          • Danny Brown

            January 18, 2013 at 4:26 pm

You’re missing the point here, Burt – you are showing it to people who aren’t part of that private Facebook group, because you’re allowing these posts to be shown in a public Storify stream. I trust the people I’m part of a private group with – I don’t trust technology that ignores privacy settings, whose makers say “Don’t blame us if we post private stuff because someone in the group shared it.”

APIs can recognize privacy settings (why do you think social scoring tools primarily have to use public Twitter feeds for their scores versus private conversations and communities?). It’s easy to shift blame; it’s less easy to do the right thing and build technology that filters private settings and blocks sharing. But the reward for any company doing this is more than worth the effort.

        • Julie Pippert

          January 18, 2013 at 4:29 pm

          Not that simple IMHO. We get used to Facebook restricting us from sharing private content. You can trust people and trust privacy, yet accidentally or innocently share. I learned a lesson the hard way. There’s a point to that.

      • Julie Pippert

        January 18, 2013 at 4:26 pm

        That’s a great point, Danny! The tools do need to respect the privacy settings. We can use caution–such as choosing words wisely, setting privacy, being in private groups, etc. But as in this article, even a really good statement that reflected well on the person was not okay with her to share. She shared it in perceived privacy and public share could have negatively affected her job. Not because she said anything wrong, but because she was not able to make a public statement.

  4. Ike Pigott

    January 18, 2013 at 9:43 pm

    I would enjoy Storify so much more if it had more privacy options of its own.

    For example, it’s a great tool for curating a cross-platform, extended conversation. But what if I want to share that compilation with a limited group? Storify has no “Unlisted” option, like YouTube and Posterous have to great effect.

    Until it has that feature, I can’t afford to use it.

  5. Nick

    January 22, 2013 at 11:58 am

    Is this news? A friend can publish your content with storify or they can take screenshot of your post. Where is the difference?

  6. christof_ff

    January 23, 2013 at 5:45 am

    I don’t get what the problem is – they could just as easily take a screenshot, or publish private printed correspondence.
Surely the lesson is don’t trust your innermost thoughts with stupid people who are likely to share them with the world?

  7. Edward Cullen

    March 8, 2013 at 12:35 am

Nice post. I fully agree and am satisfied with your conclusion.

  8. Pingback: Your Private Facebook Posts Can Be Publicly Shared Through Storify | Live Shares Daily | Sharing Updated News Daily



The FBI has a new division to investigate leaks to the media

(MEDIA) The FBI has launched a division dedicated completely to investigating leaks, and the stats of their progress and formation are pretty surprising…


Expanding its capability to investigate potential governmental leaks to the media, the Federal Bureau of Investigation (FBI) created a new unit to address those threats in 2018.

Documents obtained by TYT as a part of their investigation identify the need for the unit as being due to a “rapid” increase in the number of leaks to the media from governmental sources.

“The complicated nature of — and rapid growth in — unauthorized disclosure and media leak threats and investigations has necessitated the establishment of a new Unit,” one of the released and heavily redacted documents reads.

The FBI appeared to create accounting functions to support the new division, with one document dated in May 2018 revealing that a cost code for the new unit was approved by the FBI’s Resource Analysis Unit.

In August 2017, former Attorney General Jeff Sessions stated that such a unit had already been formed to address these investigations, which he had deemed too few in number shortly after taking office in February 2017.

By November of the same year, Sessions claimed that the number of investigations by the Justice Department had increased by 800%, as the Trump administration sought to put an end to the barrage of leaks regarding both personnel and policy that appeared to come from within the ranks of the federal government.

The investigation and prosecution of leaks to the media from government reached a zenith under the Obama administration, which relied on a United States law that originated in 1917 and had long gone unused for such purposes.

The Espionage Act treats as criminal the unauthorized release of information that is deemed secret in the interest of national security and that could be used to harm the United States or aid an enemy. While controversial in application, the Obama administration used it to prosecute more than twice as many alleged leakers as all previous administrations combined, a total of 10 leak-related prosecutions.

In July 2018, Reality Winner pleaded guilty to one felony count of leaking classified information, representing the first successful prosecution under the Trump administration of someone who leaked governmental secrets to the media.

Winner, a former member of the Air Force and a contractor for the National Security Agency at the time of her arrest, was accused of sharing a classified report regarding alleged Russian involvement with the election of 2016 with the news media. Her agreed-upon sentence of 63 months in prison was longer than the average of those convicted for similar crimes, with the typical sentence ranging from one to three and a half years.

Defendants charged under the Espionage Act face an added challenge in mounting their case: they are prohibited from arguing that the disclosure was in the public interest.


MeWe – the social network for your inner Ron Swanson

MeWe, a new social media site, seems to offer everything Facebook does and more, but with privacy as a foundation of its business model. Said MeWe user Melissa F., “It’s about time someone figured out that privacy and social media can go hand in hand.”


Let’s face it: Facebook is kind of creepy. Between facial recognition technology, demanding your real name, and mining your accounts for data, social media is becoming increasingly invasive. Users have looked for alternatives to mainstream social media that genuinely value privacy, but the alternatives to Facebook have been lackluster.

MeWe is poised to change all of that, if it can muster up a network strong enough to compete with Facebook. On paper, the new social media site seems to offer everything Facebook does and more, but with privacy as a foundation of its business model. Said MeWe user Melissa F., “It’s about time someone figured out that privacy and social media can go hand in hand.”

MeWe prioritizes privacy in every aspect of the site, and in fact, users are protected by a “Privacy Bill of Rights.” MeWe does not track, mine, or share your data, and does not use facial recognition software or cookies. (In fact, you can take a survey on MeWe to estimate how many cookies are currently tracking you – apparently I have 18 cookies spying on me!)

[Image: Ron Swanson]

You don’t have to share that “as of [DATE] my content belongs to me” status anymore.

Everything you post on MeWe belongs to you – the site does not try to claim ownership over your content – and you can download your profile in its entirety at any time. MeWe doesn’t even pester you with advertising. Instead of making money by selling your data (hence the hashtag #Not4Sale) or advertising, the site plans to profit by offering additional paid services, like extra data and bonus apps.

So what does MeWe do? Everything Facebook does, and more. You can share photos and videos, send messages or live chat. You can also attach voice messages to any of your posts, photos, or videos, and you can create Snapchat-like disappearing content.

You can also sync your profile to stash content in your personal storage cloud. Everything you post is protected, and you can fine-tune the permission controls so that you can decide exactly who gets to see your content and who doesn’t – “no creepy stalkers or strangers.”

MeWe is available for Android, iOS, desktops, and tablets.

This story was originally published in January 2016, but the social network suddenly appears to be gaining traction.


Reddit CEO says it’s impossible to police hate speech, and he’s 100% right

(SOCIAL MEDIA) Moderating speech online is a slippery slope, and Reddit’s CEO argues that it’s impossible. Here’s why censorship of hate speech is still so complicated.


Reddit often gets a bad rap in the media for being a cesspool of offensive language and a breeding ground for extreme, harmful ideas. This is due in part to the company’s refusal to moderate or ban hate speech.

In fact, Reddit CEO Steve Huffman recently stated that it’s not possible for the company to moderate hate speech. Huffman noted that since hate speech can be “difficult to define,” enforcing a ban would be “a nearly impossible precedent to uphold.”

As lazy as that may sound, anyone who has operated massive online groups (as we do) knows this to be unfortunate but true.

Currently, Reddit policy prohibits “content that encourages, glorifies, incites, or calls for violence or physical harm against an individual or a group of people […or] that glorifies or encourages the abuse of animals.”

Just about anything else is fair game. Sure, subreddit forums have been shut down in the past, but typically as the result of public pressure. Back in 2015, several subreddits were removed, including ones focused on mocking overweight people, transgender folks, and people of color.

However, other equally offensive subreddits didn’t get the axe. Reddit’s logic was that the company had received complaints that the now-retired subreddits were harassing others on and offline. Offensive posts are permitted; actual harassment is not.

Huffman previously stated, “On Reddit, the way in which we think about speech is to separate behavior from beliefs.” So posting something horribly racist won’t get flagged unless there’s evidence that users crossed the line from free speech to harassing behavior.

Drawing the line between harassment and controversial conversation is where things get tricky for moderators.

Other social media sites like Facebook, Instagram, and Twitter at least make an attempt, though. So what’s holding Reddit back?

Well, for one, moderating hate speech isn’t a clear cut task.

Right now, AI can’t fully take the reins because to truly put a stop to hate speech, there must be an understanding of both language and intent.

Since current AI isn’t quite there yet, Facebook currently employs actual people for the daunting task. The company mostly relies on overseas contractors, which can get pretty expensive (and can lack understanding of cultural contexts).

Users post millions of comments to Reddit per day, and paying real humans to sift through every potentially offensive or harassing post could break the bank.

Most agree that cost isn’t a relevant excuse, though, so Facebook is looking into buying and developing software specializing in natural language processing as an alternative solution. But right now, Reddit does not seem likely to follow in Facebook’s footsteps.

While Facebook sees itself as a place where users should feel safe and comfortable, Reddit’s stance is that all views are welcome, even potentially offensive and hateful ones.

This April, in an AMA (Ask Me Anything), a user straight up asked whether obvious racism and slurs are against Reddit’s rules.

Huffman responded in part, “the best defense against racism and other repugnant views both on Reddit and in the world, is instead of trying to control what people can and cannot say through rules, is to repudiate these views in a free conversation.”

So essentially, although racism is “not welcome,” it’s also not likely to be banned unless there is associated unacceptable behavior as well.

It’s worth noting that while Reddit as a whole does not remove most hate speech, each subreddit has its own set of rules that may dictate stricter rules. The site essentially operates as an online democracy, with each subreddit “state” afforded the autonomy to enforce differing standards.

Enforcement comes down to moderators, and although some content is clearly hateful, other posts fall into a grey area.

Researchers at Berkeley recently partnered with the Anti-Defamation League to create the Online Hate Index project, an AI program that identifies hate speech. While the program was surprisingly accurate in identifying hate speech, determining the intensity of statements proved difficult.

Plus, many of the same words are used in hate and non-hate comments. AI and human moderators struggle with defining what crosses the line into hate speech. Not all harmful posts are immediately obvious, and when a forum receives a constant influx of submissions, the volume can be overwhelming for moderators.
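The word-overlap problem is easy to demonstrate: a naive keyword filter cannot tell a slur used as abuse from the same word quoted in counter-speech or a news report. An illustrative sketch, with a placeholder blocklist standing in for any platform’s real one:

```python
# Illustrative only: why keyword matching fails as hate-speech detection.
# The blocklist is a placeholder; real systems need context, not word lists.

BLOCKLIST = {"vermin"}  # stand-in for genuinely hateful terms

def naive_flag(comment):
    """Flag a comment if any blocklisted word appears in it."""
    words = {w.strip('.,!?"').lower() for w in comment.split()}
    return bool(words & BLOCKLIST)

abusive = "Those people are vermin."
counter_speech = 'Calling anyone "vermin" is dehumanizing and wrong.'

# Both comments contain the word, so both get flagged -- the filter
# cannot see that the second one condemns the slur rather than using it.
```

This is the gap the article describes: distinguishing the two requires understanding intent, which is exactly what current keyword-based and early AI approaches lack.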

While it’s still worth making any effort to foster healthy online communities, until we get a boost to AI’s language processing abilities, complete hate speech moderation may not be possible for large online groups.
