Tech News

Are we really ready to be under constant video surveillance?

(TECHNOLOGY) Facial recognition technology is happening, now. What does it mean, who does it benefit, and who makes the rules?

Facial recognition technology is growing quickly. More and more applications are asking for a look at your face as the ultimate in security. What does that mean, and what are the consequences?

You’re a digitally enabled human. That means, in all likelihood, some combination of Apple, Facebook, or Google knows everything about you that matters. ‘Tis the nature of the Almighty Cloud.

At the moment, the cloud(s) consist(s) of data you gave it voluntarily. If facial recognition were to become standard, to replace user IDs and credit card numbers as identification the way those things replaced signatures, it would link your physical self to that data.
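
To make the identification idea concrete: modern face recognition systems typically boil each face image down to a numeric “embedding” and match it against stored embeddings by similarity. The sketch below is our own minimal illustration of that lookup, not any vendor’s actual code; the four-number embeddings, names, and threshold are invented (real systems use embeddings with hundreds of dimensions produced by a neural network).

    import numpy as np

    def cosine_similarity(a, b):
        # Similarity between two embedding vectors (1.0 = identical direction).
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(probe, enrolled, threshold=0.8):
        # Return the enrolled identity that best matches the probe embedding,
        # or None if nothing clears the similarity threshold.
        best_name, best_score = None, threshold
        for name, embedding in enrolled.items():
            score = cosine_similarity(probe, embedding)
            if score > best_score:
                best_name, best_score = name, score
        return best_name

    # Invented enrollment database and camera frame, for illustration only.
    enrolled = {
        "alice": np.array([0.9, 0.1, 0.3, 0.4]),
        "bob": np.array([0.2, 0.8, 0.5, 0.1]),
    }
    probe = np.array([0.88, 0.12, 0.28, 0.41])
    print(identify(probe, enrolled))  # -> alice

Swap the made-up vectors for embeddings from a camera feed and a large enrollment database, and “who is this, and where are they right now” becomes a single lookup.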

In theory, anyone with the dough for a security camera or point-of-sale machine could buy the knowledge of what you’re doing and when you’re doing it, anywhere, anytime, so long as you were in eyeshot of a networked device.

Also in theory, fraud would be impossible, no criminal would go free, and no innocent person would ever be convicted of a crime. Right. Riiight.

Faces are unique, there’d be a camera on everything, and first in line to buy themselves some Every Breath You Take benevolent stalker gear would be the police. After all, if you’ve got a driver’s license, a residency card, a passport, or about nineteen other governmental thingamajigs, the Powers That Be already have your face. They’re just trusting humans to identify it. Robots might be better!

They also might not be (remember when police robots couldn’t tell the difference between a picture of sand dunes and a butt?).

Which is it? Who’s to say? Who gets to say?

The Verge recently asked that very question of a panel of very smart people. The result was a continuum of views on regulation of facial recognition technology, which is to say, at least 1 of these 5 people has probably correctly guessed how you’ll be interacting with technology for the next 50 years.

Listen up.

Lots of people are pro-regulation, but not always for obvious reasons.

First, as always, are the philosophers. Philosophers have been fretting about tech for so long one of the cave glyphs at Lascaux probably translates to “Fire: Is Society Ready?”

But philosophers are by no means always wrong, and in this case several have correctly noted that facial recognition technology is being marketed before the discussion of its limits has even begun.

Right now, all the decisions on what the tech can and can’t do are being made by people who stand to benefit if it sells well.

More moderate voices, ironically, speak to what could be even more serious concerns. Algorithms remain badly flawed in human-facing roles (remember Salter’s Law: for every person you replace with AI in a customer-facing job, you will have to hire at least two more to handle the fallout when it screws up), and they notoriously tend to perpetuate societal failings.

Current facial recognition software, for instance, has white guys down pat, but struggles to differentiate between people of color, women, children, and the elderly. Likewise, it has an ugly habit of misidentifying innocent people as criminals when they happen to belong to the same minority group as a suspect. The data we collect as a culture reflects our cultural biases, and all an algorithm can EVER do is parse that data.
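
To see the arithmetic behind that complaint: a system can post an impressive overall accuracy while an under-represented group’s error rate is many times worse, simply because that group contributes few test cases. The numbers in this sketch are invented purely to illustrate the point.

    # Hypothetical match outcomes (True = correct identification) for two groups.
    # The counts are made up to show how a strong overall accuracy can hide a
    # much higher error rate for an under-represented group.
    outcomes = {
        "group_a": [True] * 980 + [False] * 20,  # 98% correct, 1,000 samples
        "group_b": [True] * 60 + [False] * 40,   # 60% correct, only 100 samples
    }

    total = sum(len(results) for results in outcomes.values())
    correct = sum(sum(results) for results in outcomes.values())
    print(f"overall accuracy: {correct / total:.1%}")  # 94.5%, which looks great
    for group, results in outcomes.items():
        print(f"{group}: {sum(results) / len(results):.1%} correct")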

This is enough of a problem that many facial recognition companies are in favor of regulation, seeking to set development parameters from “go” in order to keep from perpetuating old ills.

On the anti-regulation side, shockingly, are early adopters who jumped in headfirst without triple-checking the consequences, and a bunch of people who sell facial recognition technology and would quite like to have all the money, now, please.

They also have an extremely important point. The plain fact is that regulation cannot keep up with innovation.

Culture now moves too quickly for laws to keep up, and legislators are notoriously not tech-savvy. The people best qualified to understand exactly how facial recognition technology works, and therefore to determine what limitations are necessary, are the people making it.

Opponents of regulation also point to the successes of facial recognition as implemented to date. It has been used successfully in areas ranging from law enforcement to device security to shorter airport lines. Don’t know about y’all, but we at AG are all for improving all of those things.

So as of today, you are being surveilled. That’s fact.

If you’re in the States, over the course of your day, you will likely be surveilled by several different private entities. Including us, by the way. Hi! We call it “consumer data,” but it’s surveillance. If you’re in China, Russia or the UK, there’s an excellent chance your primary voyeur is the government instead, since they have the most active state-run surveillance systems. It’s the price of the Digital Age; someone is watching. How much are you willing to let them see?

In China, citizens are used to (and therefore, arguably, fine with) the government watching their every move on camera, but Americans historically aren’t open to Big Brother watching.

So, we’re really asking – is effortless, contactless shopping, travel and tech worth surrendering your face to the Omniscient Eye? Or is inefficiency a price worth paying for holding onto just that much of your privacy?

It bears repeating: facial recognition is happening, now. Decide quickly.

Matt Salter is a writer and former fundraising and communications officer for nonprofit organizations, including Volunteers of America and PICO National Network. He’s excited to put his knowledge of fundraising, marketing, and all things digital to work for your reading enjoyment. When not writing about himself in the third person, Matt enjoys horror movies and tabletop gaming, and can usually be found somewhere in the DFW Metroplex with WiFi and a good all-day breakfast.

Tech News

Experts warn of actual AI risks – we’re about to live in a sci fi movie

(TECH NEWS) A new report on AI indicates that the sci fi dystopias we’ve been dreaming up are actually possible. Within a few short years. Welp.

Long before artificial intelligence (AI) was even a real thing, science fiction novels and films were warning us about the potentially catastrophic dangers of giving machines too much power.

Now that AI actually exists and, in fact, is fairly widespread, it may be time to consider some of the potential drawbacks and dangers of the technology before we find ourselves in a nightmarish dystopia the likes of which we’ve only begun to imagine.

Experts from industry as well as academia have done exactly that in a recently released 100-page report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”

The report was written by 26 experts over the course of a two-day workshop held in the UK last month. The authors broke down the potential negative uses of artificial intelligence into three categories – physical, digital, and political.

The digital category covers all of the ways that hackers and other criminals can use these advancements to hack, phish, and steal information more quickly and easily. AI can be used to create fake emails and websites for stealing information, or to scan software for potential vulnerabilities much more quickly and efficiently than a human can. AI systems can even be developed specifically to fool other AI systems.

Physical uses include AI-enhanced weapons that automate military and/or terrorist attacks. Commercial drones can be fitted with artificial intelligence programs, and automated vehicles can be hacked for use as weapons. The report also warns of remote attacks, since AI weapons can be controlled from afar, and, most alarmingly, of “robot swarms” – which are, horrifyingly, exactly what they sound like.

Read also: Is artificial intelligence going too far, moving too quickly?

Lastly, the report warns that artificial intelligence could be used by governments and other special interest entities to influence politics and generate propaganda.

AI systems are getting creepily good at generating faked images and videos – a skill that would make it all too easy to create propaganda from scratch. Furthermore, AI can be used to find the most important and vulnerable targets for such propaganda – a potential practice the report calls “personalized persuasion.” The technology can also be used to squash dissenting opinions by scanning the internet and removing them.

The overall message of the report is that developments in this technology are “dual use” – meaning that AI can be built to be either helpful to humans or harmful, depending on the intentions of the people programming it.

That means that for every positive advancement in AI, there could be a villain developing a malicious use of the technology. Experts are already working on solutions, but they won’t know exactly what problems they’ll have to combat until those problems appear.

The report concludes that all of these evil-minded uses for these technologies could easily be achieved within the next five years. Buckle up.

Tech News

Daily Coding Problem keeps you sharp for coding interviews

(CAREER) Coding interviews can be pretty intimidating, no matter your skill level, so stay sharp with daily practice leading up to your big day.

Whether you’re in the market for a new coding job or just want to stay sharp in the one you have, it’s always important to do a skills check-up on the proficiencies you need for your job. Enter Daily Coding Problem, a mailing list service that sends you one coding problem per day (hence the name) to keep your analytical skills in top form.

One of the founders of the service, Lawrence Wu, stated that the email list service started “as a simple mailing list between me and my friends while we were prepping for coding interviews [because] just doing a couple problems every day was the best way to practice.”

Now the service offers the same help to others practicing for interviews, or to individuals who just want to stay fresh in what they do. The problems are written by people who are not just experts, but who also aced their interviews with giants like Amazon, Google, and Microsoft.
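
For a flavor of what lands in your inbox, here’s a classic warm-up of the sort these lists circulate, with a short Python solution. It’s our own illustrative pick rather than something pulled from Daily Coding Problem’s archive: given a list of numbers and a target, decide whether any two of them add up to the target.

    def has_pair_with_sum(numbers, target):
        # True if any two numbers in the list add up to target.
        # One pass: remember each number and check for its complement.
        seen = set()
        for n in numbers:
            if target - n in seen:
                return True
            seen.add(n)
        return False

    print(has_pair_with_sum([10, 15, 3, 7], 17))  # True, since 10 + 7 = 17
    print(has_pair_with_sum([1, 2, 4], 8))        # False

The obvious version checks every pair in O(n²) time; the interview-ready twist is getting it down to a single pass like this.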

So how much would a service like this cost you? It’s free, with further tiers of features for additional money. As with most tech startups, the first level offers the basics: a single problem every day with some tricks and hints, as well as a public blog with additional support for interviewees. However, if you want the actual answer to each problem, and not just the announcement that you answered it incorrectly, you’ll need to pony up $15 per month.

The $15 level also comes with some neat features such as mock interview opportunities, no ads, and a 30-day money-back guarantee. For those who may be on the job market longer, or who just want the practice for their current job, the $250 level offers unlimited mock interviews, as well as personal guidance from the founders of the company themselves.

Daily Coding Problem enters a field where some big players already have a firm grasp on the market. Other services, like InterviewCake, LeetCode, and InterviewBit, offer similar opportunities to practice mock interview questions. InterviewCake lets you sort questions by the company that typically asks them, handy if you have your sights set on a specific employer. InterviewBit offers referrals and mentorship opportunities, while LeetCode allows users to submit their own questions to the question pool.

If you’ve really got your eye on the prize of landing that coveted job, Daily Coding Problem is a great way to add another tool to your toolbox and ace that interview.

Tech News

Quickly delete years of your stupid Facebook updates

(SOCIAL MEDIA) Digital clutter sucks. Save time and energy with this new Chrome extension for Facebook.

When searching for a job, or just trying to keep your business from crashing, it’s always a good idea to scan your social media presence to make sure you’re not setting yourself up for failure with offensive or immature posts.

In fact, you should regularly check your digital life even if you’re not on the job hunt. You never know when friends, family, or others are going to rabbit hole into reading everything you’ve ever posted.

Facebook is an especially dangerous place for this since the social media giant has been around for over fourteen years. Many accounts are old enough to be in middle school now.

If you’ve ever taken a deep dive into your own account, you may have found some unsavory posts you couldn’t delete quickly enough.

We all have at least one cringe-worthy post or picture buried in years of digital clutter. Maybe you were smart from the get-go and used privacy settings. Or maybe you periodically delete posts when Memories resurfaces that drunk college photo you swore wasn’t on the internet anymore.

But digging through years of posts is time consuming, and for those of us with accounts older than a decade, nearly impossible.

Fortunately, a Chrome extension can take care of this monotonous task for you. Social Book Post Manager helps clean up your Facebook by bulk deleting posts at your discretion.

Instead of individually removing posts and getting sucked into the ensuing nostalgia, this extension deletes posts in batches with the click of a button.

Select a specific time range or search criteria and the tool pulls up all relevant posts. From here, you decide what to delete or make private.

Let’s say you want to destroy all evidence of your political beliefs as a youngster. Simply put in the relevant keyword, like a candidate or party’s name, and the tool pulls up all posts matching that criteria. You can pick and choose, or select all for a total purge.

You can also salt the earth and delete everything pre-whatever date you choose. I could tell Social Book to remove everything before 2014 and effectively remove any proof that I attended college.
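
Conceptually, the selection step is just a filter over your post history by date and keyword. The Python sketch below is a hypothetical illustration of that logic, not the extension’s actual code; the Post structure and select_for_deletion function are invented for the example.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Post:
        posted_on: date
        text: str

    def select_for_deletion(posts, before=None, keyword=None):
        # Hypothetical filter: flag posts older than a cutoff date and/or
        # containing a keyword, mirroring the extension's selection options.
        selected = []
        for post in posts:
            if before and post.posted_on >= before:
                continue
            if keyword and keyword.lower() not in post.text.lower():
                continue
            selected.append(post)
        return selected

    posts = [
        Post(date(2012, 10, 3), "Voting for Candidate X, obviously"),
        Post(date(2016, 5, 20), "Brunch pics"),
    ]
    # Everything before 2014 gets flagged; the 2016 post survives.
    print(select_for_deletion(posts, before=date(2014, 1, 1)))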

Keep in mind, this tool only deletes posts and photos from Facebook itself. If you have any savvy enemies who saved screenshots, or if you cross-posted elsewhere, you’re out of luck.

The extension is free to use, and new updates support unliking posts and hiding timeline items. Go to town pretending you got hired on by the Ministry of Truth to delete objectionable history for the greater good of your social media presence.

PS: If you feel like going full scorched Earth, delete everything from your Facebook past and then switch to this browser to make it harder for Facebook to track you while you’re on the web.
