
Tech News

Context and why it matters that AI doesn’t have a clue what it is

(TECHNOLOGY NEWS) AI is learning and growing faster than ever. However, one flaw that AI cannot seem to get around is context.


Contextual oops

Let’s start with a story. This might be my favorite story in all the annals of geekdom, which is saying something for someone whose literal job is “purveyor of geeky stories.” On March 1, 1990, in AG’s beloved hometown of Austin, Texas, the United States Secret Service raided a “suspected ring of hackers.”

It was a full-on, TV-level bust: armed agents broke locks, tore up files, carted off computers, even did that "simultaneous raid so the masterminds can't get word to their button men" thing at the home of one of the people involved. Hooray! The cops beat the bad guys! Not so much.

Right reason, wrong time

Three years and a court decision later, the Secret Service had to fess up: they’d raided a game company. A tabletop game company. As in paper and dice, neither noted for being connected to the Internet. They weren’t hackers. At all. They’d written a game about hackers, and in the grim darkness of 1990, the Secret Service was fuzzy on the difference. That poor guy who got his very own private raid? He wrote their cyberpunk setting, and had dared to do research on the subject.

That’s as close as anyone there got to l33t h4x0r doings, and it turned out to be close enough for armed cops in a private citizen’s living room without an invitation.

There’s a halfway happy ending to that story, involving money paid to the company, an epic tongue-lashing from a circuit court judge, and the founding of the leading advocacy organization for digital privacy rights, but the point is the Secret Service. Their actions weren’t malicious. Stupid, yes. Hilarious in hindsight, absolutely. Catastrophic to a small business innocent of any wrongdoing, big time. But they thought they were doing the right thing. They just Did It Wrong.

Doing It Wrong

As AI saturates our lives, I reflect, as I often do, on Doing It Wrong. Fundamentally, that ridiculous case came down to a misunderstanding of context. The Secret Service didn’t have the background or expertise to differentiate between hacking and a game about hacking. That’s absurd, given that telling the difference is literally their job, but they didn’t.

Hacking, at least most hacking, is still a bad thing.

As simultaneously hilarious and horrible as it is to pull the equivalent of yanking a guy off his couch and charging him with murder for shooting someone in “Call of Duty,” shooting people is generally undesirable outside a fictional context.

Does AI know that?

Can AI make the distinction between “die, [expletive here]” in your favorite combat simulator and “die, [expletive here]” when an unpleasant person attempts to end the pizza guy with a fork? Because the Secret Service couldn’t, and they were human. Humans are pre-built for context. Computers have to be made that way, and it’s usually really hard. That’s a shade worrisome, what with AI growing like kudzu and the data it collects being used for everything from market analysis to, yes, murder investigations.

So consider this a gentle reminder that even the smartest computer is still fundamentally a box of switches.

Zero and one, off and on: a binary system faces real limitations when it comes to comprehending the complex, subjective, frankly weird human condition. Getting AI to understand context is a top priority for some of the best minds in computer science, but while they’re working, we H. sapiens will have to double down on patience and nuance, because one of our most pervasive tools won’t be very good at either. Machines may never be as good at it as we are, though three years ago I’d have said that about Go.
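To make the problem concrete, here’s a minimal sketch of why context is hard for naive text analysis. The keyword list and function are invented for illustration, not how any real moderation system works: a simple keyword counter scores the game-chat taunt and the genuine threat identically, because everything that distinguishes them lives in the context the representation throws away.

```python
# Hypothetical keyword list for illustration only.
THREAT_WORDS = {"die", "kill", "shoot"}

def naive_threat_score(text: str) -> int:
    """Count threat keywords, with no notion of surrounding context."""
    return sum(word.strip(",.!") in THREAT_WORDS for word in text.lower().split())

game_chat   = "die, noob! respawning in 5..."      # trash talk in a combat sim
real_threat = "die, noob! i know where you live"   # the same words, meant literally

# Both trip the filter equally: the keywords match, the context doesn't register.
print(naive_threat_score(game_chat), naive_threat_score(real_threat))
```

Real systems are far more sophisticated than this, but the underlying challenge is the same: the signal that matters is in everything around the words, not the words themselves.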

Mind your audience

For at least the next few years, everyone from multinational corporations and national governments down to the data junkies and media consumers reading this article will need to exercise some extra caution when it comes to AI and its assessment of people and their doings. AI doesn’t understand us quite yet.

It is, if not blind, at least a little nearsighted when it comes to context, which is basically the most important human thing.

We’re going to have to keep doing that part ourselves. Fail in this, and you risk becoming your own hilarious Doing It Wrong cautionary tale. Nobody wants that.

This story originally ran on July 26, 2017.

Matt Salter is a writer and former fundraising and communications officer for nonprofit organizations, including Volunteers of America and PICO National Network. He’s excited to put his knowledge of fundraising, marketing, and all things digital to work for your reading enjoyment. When not writing about himself in the third person, Matt enjoys horror movies and tabletop gaming, and can usually be found somewhere in the DFW Metroplex with WiFi and a good all-day breakfast.


Experts warn of actual AI risks – we’re about to live in a sci fi movie

(TECH NEWS) A new report on AI indicates that the sci fi dystopias we’ve been dreaming up are actually possible. Within a few short years. Welp.


Long before artificial intelligence (AI) was even a real thing, science fiction novels and films were warning us about the potentially catastrophic dangers of giving machines too much power.

Now that AI actually exists, and in fact, is fairly widespread, it may be time to consider some of the potential drawbacks and dangers of the technology, before we find ourselves in a nightmarish dystopia the likes of which we’ve only begun to imagine.

Experts from the industry as well as academia have done exactly that, in a recently released 100-page report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, Mitigation.”

The report was written by 26 experts over the course of a two-day workshop held in the UK last month. The authors broke down the potential negative uses of artificial intelligence into three categories: physical, digital, and political.

The digital category covers the ways that hackers and other criminals can use these advancements to hack, phish, and steal information more quickly and easily. AI can be used to create fake emails and websites for stealing information, or to scan software for potential vulnerabilities much more quickly and efficiently than a human can. AI systems can even be developed specifically to fool other AI systems.

Physical uses include AI-enhanced weapons that automate military and/or terrorist attacks. Commercial drones can be fitted with artificial intelligence programs, and automated vehicles can be hacked for use as weapons. The report also warns of remote attacks, since AI weapons can be controlled from afar, and, most alarmingly, “robot swarms” – which are, horrifyingly, exactly what they sound like.


Lastly, the report warned that artificial intelligence could be used by governments and other special interest entities to influence politics and generate propaganda.

AI systems are getting creepily good at generating faked images and videos – a skill that would make it all too easy to create propaganda from scratch. Furthermore, AI can be used to find the most important and vulnerable targets for such propaganda – a potential practice the report calls “personalized persuasion.” The technology can also be used to squash dissenting opinions by scanning the internet and removing them.

The overall message of the report is that developments in this technology are “dual use” — meaning that AI can be created that is either helpful to humans, or harmful, depending on the intentions of the people programming it.

That means that for every positive advancement in AI, there could be a villain developing a malicious use of the technology. Experts are already working on solutions, but they won’t know exactly what problems they’ll have to combat until those problems appear.

The report concludes that all of these evil-minded uses for these technologies could easily be achieved within the next five years. Buckle up.


Daily Coding Problem keeps you sharp for coding interviews

(CAREER) Coding interviews can be pretty intimidating, no matter your skill level, so stay sharp with daily practice leading up to your big day.


Whether you’re in the market for a new coding job or just want to stay sharp in the one you have, it’s always important to do a skills check-up on the proficiencies you need for your job. Enter Daily Coding Problem, a mailing list service that sends you one coding problem per day (hence the name) to keep your analytical skills in top form.

One of the founders of the service, Lawrence Wu, stated that the email list service started “as a simple mailing list between me and my friends while we were prepping for coding interviews [because] just doing a couple problems every day was the best way to practice.”

Now the service offers this help to others who are practicing for interviews, or to individuals who just need to stay fresh in what they do. The problems are written by individuals who are not just experts, but who also aced their interviews with giants like Amazon, Google, and Microsoft.

So how much would a service like this cost you? It’s free, with further tiers of features for additional money. As with many tech startups, the first level offers the basics: a single problem every day with some tricks and hints, plus a public blog with additional support for interviewees. However, if you want the actual answer to the problem, and not just the announcement that you answered it incorrectly, you’ll need to pony up $15 per month.

The $15 level also comes with some neat features such as mock interview opportunities, no ads, and a 30-day money-back guarantee. For those who may be on the job market longer, or who just want the practice for their current job, the $250 level offers unlimited mock interviews, as well as personal guidance from the founders of the company themselves.

Daily Coding Problem enters a field where some big players already have a firm grasp on the market. Other services, like InterviewCake, LeetCode, and InterviewBit, offer similar mock interview practice. InterviewCake lets you sort questions by the company that typically asks them, handy if you have your sights set on a specific employer. InterviewBit offers referrals and mentorship opportunities, while LeetCode allows users to submit their own questions to the question pool.

If you’ve really got your eye on the prize of landing that coveted job opportunity, Daily Coding Problem is a great way to add another tool to your toolbox and ace that interview.


Quickly delete years of your stupid Facebook updates

(SOCIAL MEDIA) Digital clutter sucks. Save time and energy with this new Chrome extension for Facebook.


When searching for a job, or just trying to keep your business from crashing, it’s always a good idea to scan your social media presence to make sure you’re not setting yourself up for failure with offensive or immature posts.

In fact, you should regularly check your digital life even if you’re not on the job hunt. You never know when friends, family, or others are going to rabbit hole into reading everything you’ve ever posted.

Facebook is an especially dangerous place for this since the social media giant has been around for over fourteen years. Many accounts are old enough to be in middle school now.

If you’ve ever taken a deep dive into your own account, you may have found some unsavory posts you couldn’t delete quickly enough.

We all have at least one cringe-worthy post or picture buried in years of digital clutter. Maybe you were smart from the get-go and used privacy settings. Or maybe you periodically delete posts when Memories resurfaces that drunk college photo you swore wasn’t on the internet anymore.

But digging through years of posts is time consuming, and for those of us with accounts older than a decade, nearly impossible.

Fortunately, a Chrome extension can take care of this monotonous task for you. Social Book Post Manager helps clean up your Facebook by bulk deleting posts at your discretion.

Instead of individually removing posts and getting sucked into the ensuing nostalgia, this extension deletes posts in batches with the click of a button.

Select a specific time range or search criteria and the tool pulls up all relevant posts. From here, you decide what to delete or make private.

Let’s say you want to destroy all evidence of your political beliefs as a youngster. Simply put in the relevant keyword, like a candidate or party’s name, and the tool pulls up all posts matching that criteria. You can pick and choose, or select all for a total purge.

You can also salt the earth and delete everything pre-whatever date you choose. I could tell Social Book to remove everything before 2014 and effectively remove any proof that I attended college.
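The selection step described above boils down to a simple filter over dated posts. Here’s a sketch of that logic in Python; the post data and the function name are invented for illustration and aren’t the extension’s actual code:

```python
from datetime import date

# Hypothetical posts as (date, text) pairs, purely for illustration.
posts = [
    (date(2012, 11, 6), "Vote for my guy! #election"),
    (date(2013, 5, 20), "Finals week. Send coffee."),
    (date(2016, 1, 1),  "New year, new me."),
]

def select_for_deletion(posts, *, before=None, keyword=None):
    """Collect posts older than `before` or containing `keyword` into one batch."""
    batch = []
    for posted, text in posts:
        if before and posted < before:
            batch.append((posted, text))
        elif keyword and keyword.lower() in text.lower():
            batch.append((posted, text))
    return batch

# Everything before 2014 -- the first two posts end up in the batch.
print(select_for_deletion(posts, before=date(2014, 1, 1)))
```

The whole appeal of the tool is that it builds this batch for you and then clicks through the deletions, instead of you doing it one post at a time.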

Keep in mind, this tool only deletes posts and photos from Facebook itself. If you have any savvy enemies who saved screenshots or you cross-posted, you’re out of luck.

The extension is free to use, and new updates support unliking posts and hiding timeline items. Go to town pretending you got hired on by the Ministry of Truth to delete objectionable history for the greater good of your social media presence.

PS: If you feel like going full scorched earth, delete everything from your Facebook past and then switch to this browser to make it harder for Facebook to track you while you’re on the web.
