Last week, Instagram, WhatsApp, and Facebook suffered a massive worldwide outage that lasted most of the day. In typical Internet fashion, frustrated users took to Twitter to vent their feelings. A common thread running through all of the dumpster-fire gifs was the implication that these social media platforms are a necessary outlet for connecting people with information—as well as an emotional outlet for whatever users feel they need to share.
It’s this dual nature of social media—both a vessel for content that people consume and a product they feed personal data into (for followers, but also knowing that the data is collected and analyzed by the companies)—that confuses people as to what these things actually are. Is social media a form of innovative technology? Is it really about the content, making it media? Is it both?
Well, the answer depends on how you want to approach it.
Although users may say that content is what keeps them coming back to the apps, the companies themselves insist that the apps are technology. We’ve discussed this distinction before, and how it lets the social media giants skirt more stringent regulation.
But, as many point out, if the technology is dependent on content for its purpose (and the companies’ profit): where does the line between personal information and corporate data mining lie?
Should social media outlets known for their platform being used to perpetuate “fake news” and disinformation be held to higher standards in ensuring that the information they spread is accurate and non-threatening?
As it currently stands, social media companies don’t have any legislative oversight—they operate almost exclusively in a state of self-regulation. This is because they are classified as technology companies rather than media outlets.
This past summer, Senator Mark Warner of Virginia argued in a widely circulated white paper that social media platforms such as Twitter, Facebook, and Instagram need regulation. Citing the Cambridge Analytica scandal, which rocked the polls and underscored social media’s potential to sway real-life policy by way of propaganda,
Warner suggested that lawmakers target three areas for regulation: fighting politically oriented misinformation, protecting user privacy, and promoting competition among Internet markets that will make long-term use of the data collected from users.
Warner isn’t the only person who thinks that social media’s current unmoored, self-regulated existence is a bit of a problem, but part of the problem comes down to what would be considered user error: the people using social media have forgotten that they are the product, not the apps.
Technically, many users of social media have signed their privacy away by clicking “accept” on terms and conditions they haven’t fully read.* And the difficulty of determining whether or not a meme is Russian propaganda isn’t a glitch in code—it’s a way to exploit media illiteracy and confirmation bias.
So, how can you regulate human behavior? Is it on the tech companies to try to be better than the tendencies of the people who use them? Ideally they wouldn’t have to be told not to take advantage of people, but when people are willingly signing up to be taken advantage of, who do you target?
It’s a murky question, and it’s only going to get trickier to solve the more social media embeds itself into our culture.
*Yes, I’m on social media and I blindly clicked it too! He who is without sin, etc.