Social Media

UpTier makes it easy to promote your Zillow listing with QR codes

Should homeowners wish to promote their property listing on Zillow, now they can by dropping in a link or simply entering the listing ID. Print it, wear it, share it: the world of QR codes is growing.

UpTier also creates QR codes for social profiles for anyone: simply enter your social usernames and create. It's free and easy.

We're not sure what the pay model will look like in the future, but that brings us to the ultimate problem with QR codes that no one but us is talking about: QR spam.

QR spam will most likely stop QR adoption in its tracks unless a guaranteed safe transaction is presented. In fact, unless I know exactly what is on the other side of a code, I will not scan it. Why wouldn't I want to scan it? Because my phone is worth more than the convenience of the 50/50 chance I'm taking of scanning a Trojan horse or an ad serve onto my handheld device. Like many people, I have been trained not to click on links if I don't know where they'll take me, and QR codes are the future of spam links.
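
There is no trust layer built into the QR standard itself; the only real defense today is to inspect the decoded payload before opening it. A minimal sketch of that kind of check, in Python, might look like this (the allowed schemes and the `TRUSTED_DOMAINS` allowlist are our own illustrative assumptions, not any existing standard):

```python
# Minimal sketch: vet a URL decoded from a QR code before opening it.
# The allowed schemes and the TRUSTED_DOMAINS allowlist are illustrative
# assumptions, not part of any QR specification.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"zillow.com", "www.zillow.com"}  # hypothetical allowlist

def is_safe_to_open(decoded_payload: str) -> bool:
    url = urlparse(decoded_payload)
    if url.scheme not in ("http", "https"):  # reject tel:, sms:, file:, etc.
        return False
    return url.hostname in TRUSTED_DOMAINS   # only open domains you trust

print(is_safe_to_open("https://www.zillow.com/homedetails/12345"))  # → True
print(is_safe_to_open("http://evil.example/trojan.apk"))            # → False
```

Until a scanner app does something like this for you, the allowlist lives in your head: know the destination, or don't scan.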

Mark my words: until a trusted source is created that all QR codes are fed through, investing real money in QR codes is a risky proposition for Realtors. If I see a code hard-printed on a Realtor's sign, I am more likely to trust the QR destination than I am on random paper items such as business cards.

In fact, just today I received a QR code in an email from a trusted source. Did I scan it? No, I did not, and you shouldn't either.

This is no knock on UpTier; I'm sure their product is sound. But if it becomes ad-supported in the future, then it's spam, is it not? And what of the news of the first-ever virus shared via QR code? The publicity alone will dead-end the future of QR use.

A solution? Is it time for certified codes? If so, what would that even mean or look like?

QR Spam, it’s what’s cookin’.

Benn Rosales is the Founder and CEO of The American Genius (AG), a national news network for tech and entrepreneurs, proudly celebrating 10 years in publishing and recently ranked as the #5 startup in Austin. Before founding AG, he founded one of the first digital media strategy firms in the nation and acquired several other firms. His earlier resume includes roles at Apple and Kroger Foods, specializing in marketing, communications, and technology integration. He is a recipient of the Statesman Texas Social Media Award and an Inman Innovator Award winner. He has consulted for numerous startups (both early- and late-stage), built partnerships between tech recruiters and the best tech talent in the industry, and is well known for organizing the digital community through popular monthly networking events. Benn does not often venture into the spotlight; rather, he believes his biggest accomplishment is the talent he recruits and develops, and he gives all credit to those he's empowered.

11 Comments

  1. Steven Noreyko

    February 28, 2011 at 1:51 pm

    Seems pretty paranoid to me. How would a QR Code inject a Trojan or other virus on to your mobile device? I’d like to know if it’s even possible.

    Most of the scanner apps I’ve looked at on iPhone will sandbox the QR Code text within the app, and then offer to send you to a browser or make a phone call, etc. Not sure what BlackBerry or Android users have to deal with here.

    Seeing SPAM is certainly a problem with QR codes, but if you scan the code, that content is more HAM (to you) than SPAM since you ASKED to see the content.

    I'm curious to watch what happens in this space.

    • Benn Rosales

      February 28, 2011 at 2:09 pm

      It isn't ham if your once ad-free QR code is now filtered through an ad to support the QR provider.

      Paranoid?
      Come on. Clicking YES or NO on your computer screen was once upon a time seen as safe. lol Shortened links once upon a time never sent you to a malicious site, and today QR codes are obviously safe. Sure. :) It's evolution.

  2. Ralph Bell

    February 28, 2011 at 4:50 pm

    @Benn I use my own QR software on my server. I at least know that it will never have ads. But like everything else on the net, someone will find a way to take an originally great idea and turn it into spam…Facebook, Twitter, bit.ly, etc. All have succumbed to the evils of online marketing.

  3. Joe Cascio

    February 28, 2011 at 7:53 pm

    I think the previous commenter was not out of line in using the term paranoia. This article doesn’t point out any specific threat, exploit or vulnerability owing uniquely to QR codes. It mentions no actual instances where a QR code in and of itself was used to deliver a virus or malware payload. It doesn’t even theorize on a QR-specific vulnerability or threat vector.

    I did a little research on this topic, and the only specific vulnerability regarding QR codes I found had to do with, guess what… Windows ActiveX. And that was some years ago and has undoubtedly been rectified.

    Let's look at the facts here. QR codes are merely data. You can't put anything in a QR code URL that you can't put in a URL that you publish on a web page. You can't embed binary executable code in a QR code. A QR code doesn't contain any threat that doesn't already exist, as far as I can see. If you use iOS or Android, the phone will prevent the browser from installing or executing any code you didn't specifically authorize or that doesn't come from their app stores.

    But the point is, it’s no different from clicking a malicious link on a web page. If you take the proper precautions to protect yourself from malicious web pages, a QR code won’t hurt you either.

    If the author knows of a particular new or unique threat presented by QR codes, then he should state it and stop hand-waving. If he knows of or has heard of a case where a QR code was used as an exploit vector then he should give us what facts he has.

    QR codes may be new to the author and many others in the US, but they have been in use for years by the millions all over Japan and Europe. If there was some particular threat owing to their use, I presume we would have heard at least something about it by now.

  4. Stacy Chapman

    February 28, 2011 at 11:32 pm

    Interesting post, Benn! We have been considering adding QR codes to our event ticketing software, and I really never considered the possibility of spam on the other side of the URL. I'm not sure if the thought of potential spam will sway me from scanning in the future, but it will definitely make me think twice if I don't know the source.

    This past weekend, I actually scanned several QR codes on my phone from real estate signs and was excited at how easily I could get house data when a flyer was not present. The only frustration was how slowly a few of the websites loaded, since they were driven entirely by Flash due to the high number of images and virtual tours.

  5. Dawn Green

    March 1, 2011 at 12:05 am

    Hey Benn, great article! We'd been thinking that adding QR codes to our print-at-home PDF tickets would be in a future upgrade, but you blindsided me with this! Obviously, I'll take a wait-and-see approach on this one.

    Thanks!


Social Media

Facebook pays $52M to content mods with PTSD, proving major flaw in their business

(SOCIAL MEDIA) Facebook will pay out up to $52 million to former content moderators suffering from PTSD to settle a 2018 class action lawsuit.


Facebook’s traumatized former content moderators are finally receiving their settlement for the psychological damage caused by having to view extremely disturbing content to keep it off of Facebook.

The settlement is costing the company $52 million, distributed as a one-time payment of $1,000 to each of the 10,000+ content moderators in four states. If any of these workers seek psychological help and are diagnosed with conditions related to their jobs, Facebook must also pay for that medical treatment, up to $50,000 per moderator in additional damages (on a case-by-case basis).
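
Rough arithmetic on those figures (assuming exactly 10,000 moderators, the article's lower bound) shows the one-time payments account for only a fraction of the fund, with the bulk reserved for treatment and additional damages:

```python
# Rough arithmetic on the settlement figures.
# Assumes exactly 10,000 moderators (the article's lower bound).
total_fund = 52_000_000   # total settlement, in dollars
moderators = 10_000
base_payment = 1_000      # one-time payment per moderator

base_total = moderators * base_payment  # $10,000,000 in base payouts
remaining = total_fund - base_total     # up to $42,000,000 left for
                                        # medical treatment and damages
print(base_total, remaining)  # → 10000000 42000000
```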

Going forward, Facebook will also offer one-on-one mental health counseling to content moderators, will attempt to create a screening process to gauge future candidates' emotional resiliency, and will give moderators the ability to stop seeing specific types of reported content.

According to NPR, Steve Williams, a lawyer for the content moderators, said, “We are so pleased that Facebook worked with us to create an unprecedented program to help people performing work that was unimaginable even a few years ago. The harm that can be suffered from this work is real and severe.”

Honestly, this job is not for the faint of heart, to say the least. Like the hard-working, yet not unfazeable police officers on Law & Order SVU, seeing the worst of humanity takes a toll on one’s psyche. Facebook’s content moderators are only human, after all. These workers moderated every conceivable–and inconceivable–type of disturbing content people posted on the 2 billion-users-strong social media platform for a living. Some for $28,800 a year.

I wouldn't last five minutes in this role. It is painful to even read about what these content moderators witnessed for eight hours a day, five days a week. And while Facebook, as part of the agreement, refuses to admit any wrongdoing: come on, man. Graphic and disturbing content that upset someone enough to report it to Facebook is what these people viewed all day, every day. It sounds almost like a blueprint for creating trauma.

This settlement surely sets the precedent for more class action lawsuits to come from traumatized content moderators on other social media platforms. The settlement also shows this business model for what it is: flawed. This isn’t sustainable. It’s disgusting to think there are people out there posting heinous acts, and I am grateful the platform removes them.

However, they have to come up with a better way. Facebook employs thousands upon thousands of really smart people who are brilliant at computer technology. Twitter and YouTube and similar platforms do, too. They need to come up with a better plan going forward, instead of traumatizing these unfortunate souls. I don’t know what that will look like. But with Facebook’s sky-high piles of money and access to so many brilliant minds, they can figure it out. Something’s got to give. Please figure it out.


Social Media

Twitter will give users a warning before a harmful tweet is sent

(SOCIAL MEDIA) Twitter is rolling out a new warning giving users a chance to edit their tweet before they post “harmful” language, and we aren’t sure how to feel about it.


Twitter is testing out a new warning system for potentially offensive tweets. If a tweet contains language Twitter deems “harmful,” Twitter will pop up with a warning and opportunity to revise the potentially offensive tweet before posting. The warning mentions that language in the tweet is similar to previously reported tweets.

If internal alarms are going off in your head, congratulations, you are wary of any censorship! However, if you read a tweet spewing with bile, racism, or threatening violence against a person or institution, do you report it? Do you want Twitter to take it down? If you said yes, then congratulations, you want to protect the vulnerable and fight hatred.

If you are wary of censorship, yet want to fight hatred and protect the vulnerable, welcome to the interwebs! It’s a crazy and precarious place where almost anything can happen. Despite decades of use, we’re still navigating our way through the gauntlet of tough decisions the proliferation of platforms and ease of use have given us.

First, how does Twitter gauge a potentially harmful tweet? According to Twitter, the app responds to language similar to prior tweets that people have reported. Twitter, like Facebook, Instagram, and other social platforms, already has hateful conduct rules in place. In fact, Twitter has a host of rules and policies intended to protect users from fraud, graphic violence, or explicitly sexual images.
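
Twitter hasn't said how that similarity matching works. As a toy illustration of the general idea only, one could flag a draft whose word overlap with previously reported tweets crosses a threshold; the Jaccard metric and the 0.5 cutoff here are our own assumptions, not Twitter's actual system:

```python
# Toy sketch: warn on a draft tweet whose word overlap (Jaccard similarity)
# with any previously reported tweet exceeds a threshold.
# The metric and threshold are illustrative assumptions, not Twitter's method.

def tokens(text: str) -> set:
    return set(text.lower().split())

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def should_warn(draft: str, reported: list, threshold: float = 0.5) -> bool:
    draft_tokens = tokens(draft)
    return any(jaccard(draft_tokens, tokens(r)) >= threshold for r in reported)

reported_tweets = ["you are a terrible awful person"]
print(should_warn("you are a terrible awful human", reported_tweets))  # → True
print(should_warn("have a lovely day", reported_tweets))               # → False
```

A production system would need far more than word overlap, of course; the point is only that the warning is triggered by resemblance to already-reported language, not by a human reading every draft.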

Their rationale is detailed, but explains, “Our role is to serve the public conversation, which requires representation of a diverse range of perspectives.” However, they “recognise that if people experience abuse on Twitter, it can jeopardize their ability to express themselves.”

We've heard stories of teenagers, or even younger children, killing themselves after relentless bullying online. The feeling of anonymity when insulting a living, breathing being from behind a computer screen often causes a nasty pile-on effect. We've seen people use social media to bully, sexually harass, and threaten others.

Twitter cites research showing women, people of color, LGBTQIA+ individuals, and other vulnerable populations are more likely to stop expressing themselves freely when someone abuses them on social media. Even Kelly Marie Tran, who played Resistance fighter Rose Tico in Star Wars, took down her Instagram photos before taking a stand against haters. And she had Jedis in her corner. Imagine your average person's response to such cruel tactics.

We’ve seen hate groups and terrorist organizations use social media to recruit supporters and plan evil acts. We see false information springing up like weeds. Sometimes this information can be dangerous, especially when Joe Blow is out there sharing unresearched and inaccurate medical advice. Go to sleep, Blow, you’re drunk.

As an English major, and an open-minded person, I have a problem with censorship. Banned books are some of my favorites of all time. However, Twitter is a privately owned platform. Twitter has no obligation to amplify messages of hate. They feel, and I personally agree, that they have some responsibility to keep hateful words inciting violence off of their platform. This is a warning, not a ban, and one they’re only rolling out to iOS users for now.

I mean, in the history of angry rants, when was the last time a “Hey, calm down, you shouldn’t say that” ever made the person less angry or less ranty? Almost never. In which case, the person will make their post anyway, leaving it up to masses to report it. At that time, Twitter can make the decision to suspend the account and tell the user to delete it, add a warning, or otherwise take action.

Every once in a while, though, someone may appreciate the note. If you’ve ever had a colleague read an email for “tone” in a thorny work situation, you know heeding a yellow flag is often the wisest decision. This warning notice gives users a chance to edit themselves. As a writer, I always appreciate a chance to edit myself. If they flag every damn curse word, though, that will get real annoying real fast. You’re not my mom, Twitter. You’re not the boss of me.

This isn't your great-granddaddy's book burning. This is 2020. The internet giveth; the internet taketh away. It's a crying shame that evil creeps in when we're not looking. Speech has consequences. Users can't edit tweets, so once it's out there, it's out there. Even if they delete a tweet within moments of posting, anyone can screenshot that baby and share it with the world. Part of me says, "Good, let the haters out themselves."

Twitter has shown itself to be open to differences in opinion, encouraging freedom of expression, and has opened up a whole new line of communication for traditionally underrepresented populations. They are a private company, and their rules and policies are posted. What, you didn’t read the terms of use? Gasp!

It’s Twitter’s rodeo, after all. This warning gives users a quick, added heads up to posting something that will likely be reported/removed anyway. For better or worse, Twitter’s still leaving it up to users to post what they want and deal with the potential fallout. Hey, I have a great idea! How about we all be respectful of each other on the internet, and Twitter won’t have to come up with this kind of thing.


Social Media

Yelp adds virtual services classification to help during COVID

(SOCIAL MEDIA) Yelp constantly adds new classifications to help you find a business that meets your needs; now, because of COVID, it has added virtual services.


Yelp is making efforts to accommodate businesses whose operations are adapting in response to the coronavirus pandemic. Several new features will help businesses display updated services.

The company has added an information category titled virtual service offerings. Businesses can display service options such as classes, virtual consultations, performances, and tours, and Yelpers can search for businesses based upon those offerings.

Yelp has already noticed trends of users incorporating virtual services into their business profiles. In a report by TechCrunch, Yelp's head of consumer product Akhil Kuduvalli said, “With these new product updates, businesses of all types that are adapting and changing the way they operate will be able to better connect with their customers and potentially find new ones.”

Virtual services in categories like fitness, gyms, home services, real estate, and health are already increasing in popularity. Yelp intends to showcase businesses that are providing those services by creating new Collections.

“Once business owners update their virtual service offerings on their Yelp for Business profiles, we will surface those updates to consumers through new call-to-action buttons, by updating the home screen and search results with links to groups of businesses offering these new virtual services, as well as surfacing them in other formats like Collections,” said Kuduvalli.

Also in the works is a curbside pickup category for restaurants. Additionally, Yelp introduced a free customized banner for businesses to post updates on their profiles. About 224,000 businesses have used the banner so far.

Yelp hasn't stopped there. It's made its Connect feature (which allows businesses to share important updates with all Yelpers on their profile and their email subscribers) free to eligible local businesses as part of Yelp's commitment to waive $25 million in fees to support businesses in need during the COVID-19 crisis.

During COVID-19, businesses and consumers need all the help they can get, and thankfully, Yelp is there to help.
