
Tech News

Study finds 1,000 phrases that accidentally activate smart speakers

(TECH GADGETS) Don’t worry about accidentally activating your nosy smart speakers… unless, of course, you utter one of these 1,000 innocuous phrases.


It’s safe to say that privacy concerns, especially in today’s digital era, are unquestionably valid. With new video recording technology making it easier to identify people at a glance (whether they like it or not) and concerns that your smart speakers are eavesdropping on you, it may feel like you’re bordering on paranoid around modern technology.

After all, even though there have been cases of smart speakers picking up on intimate conversations, there’s absolutely no risk of them overhearing private things without your consent, right? Even though it’s been documented that these devices — including Cortana, Alexa, Siri, and Google Home — have listened in on relationship spats, criminal activity, and even HIPAA-protected data, you’re totally in the clear.

Yeah, about that. Everything that gets broadcast into your smart speaker? There’s a very real chance that someone back at headquarters will randomly pull it up and sift through it in order to improve the AI’s learning.

And while most of the time these conversations are totally benign, it doesn’t change the fact that a complete stranger is getting an earful of your private life. In fact, these recordings are completely admissible in court, as several murder cases have already demonstrated, with the key evidence coming from none other than poor Alexa herself.

But wait, wait. These smart speakers can only get your information if you activate them, and that requires you to clearly enunciate their names. Right? Um. Not exactly. Even though you may think that you need to speak crisply into the speaker to activate it, it turns out that these devices are highly sensitive to any suggestion that you might be talking to them. It’s almost like your dog when you even remotely glance at his bag of doggie treats in the corner: one crinkle and Fido comes running, begging for some kibble and ready to serve you.

It’s the same for your smart speakers. As it turns out, there are over a thousand words or phrases that can trigger your device and invite it to start recording your voice. These can range from the perfectly reasonable (Cortana hearing “Montana” and springing to attention) to the downright absurd (Alexa raising her hackles over the words “election” and “unacceptable”). Well, crap. Now what?
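To get a feel for how near-misses like these slip through, here’s a minimal sketch of the general idea, assuming a wake-word detector that accepts anything “close enough” to its trigger word. The wake-word list, the threshold, and the string-matching shortcut are all hypothetical; real devices use proprietary acoustic models, but the failure mode is similar.

```python
# Hypothetical illustration only: real smart speakers use acoustic models, not
# string matching, but a lenient similarity threshold shows the same failure
# mode -- near-miss phrases wake the device anyway.
from difflib import SequenceMatcher

WAKE_WORDS = ["alexa", "cortana", "hey siri", "ok google"]
SENSITIVITY = 0.7  # lower = more eager to wake up (hypothetical tuning knob)

def wakes_device(phrase: str) -> list:
    """Return the wake words this phrase is 'close enough' to trigger."""
    phrase = phrase.lower()
    return [w for w in WAKE_WORDS
            if SequenceMatcher(None, phrase, w).ratio() >= SENSITIVITY]

for overheard in ["montana", "alexa", "election", "good morning"]:
    print(overheard, "->", wakes_device(overheard) or "stays asleep")
```

In this toy version, “montana” scores about 0.71 against “cortana” and sails right past the 0.7 threshold, which is exactly the kind of accidental activation the study counted.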

It’s no secret that someone is listening in on your conversations. That’s been clearly documented, researched, dissected, and even accepted at this point. However, if you thought they’d only listen in if you gave them implicit permission by activating your device (which, to be fair, shouldn’t even count as permission in the first place), you were wrong.

So what’s a privacy-loving person to do? Just suck it up and try to choose the lesser of two evils? On one hand, yes, these smart speakers are super convenient and can make your life easier. On the other?

Well, if you’re a fan of your privacy, then perhaps these devices aren’t meant for you. At this point, you’ve got little recourse. These companies will continue to use your data, and there’s nothing stopping them from spying on you. That is, unless you prevent them from doing it in the first place.

If you want to keep your private conversations private, either unplug your smart speaker when you’re not using it, or don’t get one in the first place. Otherwise, you’ll continue to give your implied consent that you’re totes cool with them butting in on your personal life, and they’ll continue to be equally totes cool with using it without your permission.

Karyl is a Southern transplant, now living on the Central Coast with her husband. She's proud to belong to two very handsome cats, both of which have made it very clear as to where she ranks on the social hierarchy. When she's not working as an optician, you can either find her chipping away at her next science-fiction novel or training for an upcoming race. She holds an AAT in Psychology, which is just a fancy way of saying that she likes poking around inside people's brains. She's very socially awkward and has no idea how to describe herself, which is why this bio is just as dorky and weird as she is.

Tech News

AI technology is using facial recognition to hire the “right” people

(TECH NEWS) Artificial intelligence (AI) technology has made its way into the hiring process, and while the intentions are good, I vote we proceed with extreme caution.


Artificial intelligence technology has made its way into the hiring process, and while the intentions are good, I vote we proceed with extreme caution.

UK-based consumer goods giant Unilever is just one of several UK companies that have begun using AI technology to sort through initial job candidates. The goal of this technology is to increase the number of candidates a company can interview at the initial stages of the hiring process and to improve response times for those candidates.

The AI, developed by American company HireVue, analyzes a candidate’s language, tone, and facial expressions during a video interview. HireVue insists that its product is different from traditional facial recognition technologies because it analyzes far more data points.

Hirevue’s chief technology officer, Loren Larsen, says, “We get about 25,000 data points from 15 minutes of video per candidate. The text, the audio and the video come together to give us a very clear analysis and rich data set of how someone is responding, the emotions and cognitions they go through.”
This data is then used to rank candidates on a scale of 1 to 100 against a database of traits identified in previously successful candidates.
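To make the mechanics concrete, here’s a rough sketch of what ranking candidates against a profile of past successful hires could look like. The trait names, numbers, and cosine-similarity scoring rule are invented for illustration; HireVue’s actual model and features are proprietary.

```python
# Rough illustration of ranking candidates against previously successful hires.
# This is NOT HireVue's proprietary model: the trait names, benchmark values,
# and scoring rule are made up to show the general shape of the idea.
import numpy as np

TRAITS = ["word_choice", "speech_pace", "vocal_energy", "smile_frequency"]  # order of vector entries

# Hypothetical trait profile averaged over past successful candidates.
benchmark = np.array([0.72, 0.55, 0.60, 0.40])

def score(candidate: np.ndarray) -> int:
    """Cosine similarity to the benchmark, rescaled to a 1-100 ranking."""
    sim = candidate @ benchmark / (np.linalg.norm(candidate) * np.linalg.norm(benchmark))
    return int(round(1 + 99 * (sim + 1) / 2))  # map [-1, 1] onto [1, 100]

applicant = np.array([0.65, 0.80, 0.30, 0.55])  # traits extracted from one video
print("candidate score:", score(applicant))
```

Everything in a scheme like this hinges on the benchmark: whatever traits past hires happened to share get rewarded automatically, which is exactly where the bias concern below comes from.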

There are two main flaws to this system. First, unless this AI technology is pulling from a huge, diverse data pool, it could be discriminating against people without anyone being aware of it. Human bias is not as easy to remove from the equation as AI proponents would have you believe.

As an example, how does this AI handle people who are disabled, or whose facial expressions read differently than those of the general population, such as people with Down syndrome or those who have survived traumatic facial injuries?

Second, seeking to hire someone who possesses the same qualities as the person who was previously successful in a role is shortsighted. There are many ways to accomplish the same task with above-average results. Companies that adopt this low-risk mentality could be missing out on great opportunities long-term. You will never know what actually works best if you don’t try.

The big question here is whether or not AI technology is ready to influence the job market on this scale.


Tech News

The ‘move fast and break things’ trend is finally over

(TECH NEWS) Time is running out for this decade — and for a popular Big Tech phrase responsible for a lot of collateral damage. What’s next?


Time is running out for the decade. With less than 20 days left, it’s got us reflecting on the journeys of different economic sectors in the United States. And no industry has had a more tumultuous time of it than Big Tech.

A lot has changed in ten years. For starters, Americans have become increasingly disillusioned with Silicon Valley. The Pew Research Center found that only 50 percent of Americans believe technology firms have a positive effect on the country. That statistic is not too bad on its own, but it’s down 21 percentage points from only four years ago. Gallup found in 2019 that 48 percent of Americans also want more regulation of Big Tech. And The New York Times called the 2010s “the decade Big Tech lost its way.”

Maybe that’s why bigwigs at these tech firms have been quietly ditching a concept that was their Golden Rule in the early part of the decade: Move Fast and Break Things.

This concept is a modern take on the adage “you can’t make an omelet without breaking a few eggs.” For most of these firms, any innovation justified the collateral damage left in its wake. And this scrappy “build it now and worry about it later” philosophy was a favorite not just of Facebook and Twitter, but of many venture capital firms too.

But not anymore. Outlets from Forbes to HBR are saying this doesn’t work for Big Tech in the 2020s. Here are some reasons why it’s over.

Stability

The Move Fast and Break Things mantra encouraged devs to push their code changes live and let the chips fall where they may. But bugs pile up. Enter technical debt.

“Technical debt happens every time you do things that might get you closer to your goal now but create problems that you’ll have to fix later,” said The Quantified VC in an article on Medium. “As you move fast and break things, you will certainly accumulate technical debt.”

If enough technical debt comes into play, any new line of code could be the thing that topples a firm like a house of cards. And now that the consumer is used to tech in their daily routines, interruptions in service are extremely bad news for everyone.

As Mark Zuckerberg himself put it: “When you build something that you don’t have to fix 10 times, you can move forward on top of what you’ve built.”

Trust

To win back some of the trust that has ebbed away from Big Tech over the years, firms can’t just stick with the Move Fast and Break Things status quo.

“The public will continue to grow weary of perceived abuses by tech companies, and will favor businesses that address economic, social, and environmental problems,” said Hemant Taneja in his article for Harvard Business Review. “Minimum viable products must be replaced by minimum virtuous products that … build in guards against potential harms.”

It’s not about chasing the bottom dollar at the cost of the consumer. Losing trust will hurt any company if left unchecked for long.

Innovation

There’s a cap on advancement in our current technological state. It’s called Moore’s Law. And we’re rapidly approaching the theoretical limits of it.
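As a back-of-the-envelope illustration of the pace Moore’s Law describes (the figures here are illustrative round numbers, not actual chip specs), transistor counts roughly doubling every two years compound quickly:

```python
# Illustrative only: Moore's Law as "transistor counts double roughly every
# two years." The starting figure is a round number, not an actual chip spec.
start_year, start_count = 2010, 1_000_000_000  # ~one billion transistors

for year in range(2010, 2021, 2):
    doublings = (year - start_year) // 2
    print(year, f"{start_count * 2 ** doublings:,}")
```

Five doublings in a decade works out to a 32x increase, and that compounding is what gets harder to sustain as the physical limits close in.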

“When you understand the fundamental technology that underlies a product or service, you can move quickly, trying out nearly endless permutations until you arrive at an optimized solution. That’s often far more effective than a more planned, deliberate approach,” said Greg Satell in his article for HBR.

Soon enough, Big Tech will be in relatively new waters with quantum computing, biofeedback and AI. There’s no way to move as fast as these technology firms have in the past. And even if they could, should they?

Big Tech has experienced major growing pains since the dawn of the new millennium. And now that some firms are entering their 20s, there’s a choice to be made: continue to grow up, or keep leaning on an idea that has worn out its welcome with consumers and comes with no guarantee of working with future technologies.

Maybe that’s why Facebook’s motto is now “Move Fast with Stable Infrastructure.”


Tech News

Computer vision helps AI create a recipe from just a photo

(TECH NEWS) It’s so hard to find the right recipe for that beautiful meal you saw on TV or online. Well, computer vision helps AI recreate it from a picture!


Ever seen a photo of a delicious-looking meal on Instagram and wondered how the heck to make it? Now there’s an AI for that, kind of.

Facebook’s AI research lab has been developing a system that can analyze a photo of food and then create a recipe. So, is Facebook trying to take on all the food bloggers of the world now too?

Well, not exactly. The AI is part of an ongoing effort to teach AI how to see and then understand the visual world. Food is just a fun and challenging training exercise. They have been referring to it as “inverse cooking.”

According to Facebook, “The ‘inverse cooking’ system uses computer vision, technology that extracts information from digital images and videos to give computers a high level of understanding of the visual world.”

The concept of computer vision isn’t new. Computer vision is the guiding force behind mobile apps that can identify something just by snapping a picture. If you’ve ever taken a photo of your credit card on an app instead of typing out all the numbers, then you’ve seen computer vision in action.
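For a flavor of that credit-card trick, here’s a minimal sketch using the open-source Tesseract OCR engine via the pytesseract package. The file name is a placeholder, and real banking apps use their own, more robust pipelines; this only shows the “read digits out of a photo” step.

```python
# Minimal sketch of "photograph your card instead of typing the number,"
# using open-source OCR (pytesseract) purely for illustration. Real apps use
# their own, more robust computer vision pipelines.
import re
from typing import Optional

from PIL import Image
import pytesseract

def read_card_number(image_path: str) -> Optional[str]:
    """OCR the photo, then look for a 16-digit card-number pattern."""
    text = pytesseract.image_to_string(Image.open(image_path))
    digits = re.sub(r"[\s-]", "", text)  # drop spaces and dashes
    match = re.search(r"\d{16}", digits)
    return match.group(0) if match else None

print(read_card_number("card_photo.jpg"))  # hypothetical image file
```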

Facebook researchers insist that this is no ordinary computer vision, because their system uses two networks to arrive at the solution, thereby increasing accuracy. According to Facebook research scientist Michal Drozdzal, the system works by dividing the problem into two parts: a neural network identifies the ingredients that are visible in the image, while a second network pulls a recipe from a kind of database.
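Here’s a highly simplified sketch of that two-stage structure. Both “networks” are stubbed out with hard-coded stand-ins (the real system uses trained neural models for each stage), so only the shape of the pipeline is real.

```python
# Highly simplified sketch of the two-stage "inverse cooking" pipeline the
# article describes. Both stages are stubbed with hard-coded stand-ins; the
# real system uses trained neural networks for each step.
def ingredient_network(photo_pixels) -> set:
    """Stage 1: predict which ingredients are visible in the photo (stubbed)."""
    return {"flour", "eggs", "sugar", "butter"}  # a real model would infer these

RECIPE_DB = {  # stand-in for the recipe-generation stage's learned knowledge
    frozenset({"flour", "eggs", "sugar", "butter"}): "1. Cream the butter and sugar...",
    frozenset({"tomato", "basil", "mozzarella"}): "1. Slice the tomatoes...",
}

def recipe_network(ingredients: set) -> str:
    """Stage 2: turn predicted ingredients into cooking instructions."""
    return RECIPE_DB.get(frozenset(ingredients), "No recipe found for these ingredients.")

def inverse_cook(photo_pixels) -> str:
    return recipe_network(ingredient_network(photo_pixels))

print(inverse_cook(photo_pixels=None))  # placeholder for real image data
```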

These two networks have been the key to the researchers’ success with more complicated dishes where you can’t necessarily see every ingredient. Of course, the tech team hasn’t set foot in the kitchen yet, so the jury is still out.

This sounds neat and all, but why should you care if the computer is learning how to cook?

Research projects like this one carry AI technology a long way. As the AI gets smarter and expands its limits, researchers are able to conceptualize new ways to put the technology to use in our everyday lives. For now, AI like this is saving you the trouble of typing out your entire credit card number, but someday it could analyze images on a much grander scale.
