
Tech News

How to opt out of Google’s robots calling your business phone

(TECH) Google’s robots now call businesses to set appointments, but not all companies are okay with talking to an artificial intelligence tool like a person. Here’s how to opt out.


You know what’s not hard? Calling a restaurant and making a reservation. You know what’s even easier? Making that reservation through OpenTable. You know what we really don’t need, but it’s here so we have to deal with it? Google Duplex.

Falling under “just because we can do it doesn’t mean we should do it,” Duplex, Google’s eerily human-sounding AI chat agent that can arrange appointments for Pixel users via Google Assistant, has rolled out in several cities, including New York, Atlanta, Phoenix, and San Francisco, which means you can now have a robot do menial tasks for you.

There’s even a demo video of someone using Google Duplex to find an area restaurant and make a reservation, and in the time it took him to tell the robot what to do, he could’ve called and booked the table himself.

Aside from booking the reservation for you, Duplex can also offer you updates on your reservation or even cancel it. Big whoop. What’s difficult to understand is the need or even demand for Duplex. If you’re already asking Google Assistant to make the reservation, what’s stopping you from making it yourself? And the most unsettling thing about Duplex? It’s too human.

It’s unethical for a robot to pass itself off as a person. And we should feel squeamish about a robo-middleman making our calls and setting our appointments when we’re perfectly capable of doing these things ourselves.

However, there is hope. Google Duplex is here, but you don’t have to get used to it.

Your company can opt out of accepting these calls by changing the setting in your Google My Business account. If robots are already calling restaurants and businesses in your city, give your staff a heads-up. They may still receive reservations via Duplex, but at least they’ll be prepared to talk to a robot.

And if you plan on not opting out, at least train your staff on what to do when the Google robots call.

Meg Furey-Marquess is a Staff Writer at The American Genius. She has covered tech for The Metro Silicon Valley and The Bold Italic. She was named one of the Top 39 Writers on Medium in 2016.

Tech News

DIY: Project Alias protects your privacy from invasive smart speakers

(TECHNOLOGY) Smart speakers are beloved, and oh so helpful, but they’re always listening, no matter what tech companies say. This DIY stops that.


DIY culture has a solution for everything, including protecting your privacy. Home assistant devices like Amazon Echo and Google Home, while helpful, are constantly listening for commands, which means any nearby conversation is fair game for information gathering. On the bright side, designer Bjørn Karmann’s Project Alias is a “parasitic” device that gives you control over what your home assistant hears.

True to its function, a Project Alias device looks like a parasitic growth that can fit atop your Amazon Echo or Google Home. Inside its 3D-printed shell are a Raspberry Pi A+, a ReSpeaker 2-Mics Pi HAT, and a pair of small speakers.

Once you install the Project Alias code, you can use your phone to connect to the device, and train it with a “wake” word. The Echo or Google Home will not hear you until you say this “wake” word to Project Alias.
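To make the mechanism concrete, here’s a minimal sketch of the idea behind Project Alias, based on the project’s public description: the Pi keeps the assistant’s microphones busy with quiet noise and only plays the real trigger phrase after its own offline keyword spotter hears your custom wake word. The helper functions below are stand-in stubs for illustration, not the project’s actual code.

```python
import time

def start_noise():
    # Real build: drive the two small speakers inside the shell with
    # low-level noise so the Echo / Home can't make out the room.
    print("[alias] jamming the assistant's microphones with noise")

def stop_noise():
    print("[alias] jamming stopped")

def heard_custom_wake_word(word):
    # Real build: offline keyword spotting on the Pi via the ReSpeaker HAT.
    # Stubbed with console input here so the sketch runs anywhere.
    return input(f"(say something; type '{word}' to wake) ").strip().lower() == word

def play_assistant_trigger():
    # Real build: play a recorded "OK Google" / "Alexa" into the smart speaker.
    print("[alias] playing the recorded trigger phrase")

def alias_loop(custom_wake_word="alias"):
    start_noise()
    while True:
        if heard_custom_wake_word(custom_wake_word):
            stop_noise()
            play_assistant_trigger()
            time.sleep(8)   # window for you to speak your real command
            start_noise()

if __name__ == "__main__":
    alias_loop()
```

The point of the design is that nothing leaves your home until you deliberately wake the device; until then, the big-name assistant only ever hears noise.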

Ta-da! Privacy is back in your hands! (Some assembly required).

The pieces to make your own Project Alias device, while attainable (Office Max and Staples offer 3D printing services), require some hands-on work and possibly several trips to the store. When all is said and done, the overall cost in time and money can add up. It’d be much simpler if a Project Alias device came in the mail or sat on a store shelf ready to roll.

This is sounding like prime Kickstarter material here.

The exploitation of privacy through our smart devices, including phones, tablets, laptops, TVs, and gaming consoles, is becoming a common concern. Not only are we bombarded by advertisements that feel like they’re reading our thoughts, but anything said around these devices is collectible data.

Unfortunately, it’s wishful thinking to have any trust in the gadgets we own.

A device like Project Alias is a long time coming and needed now more than ever. Until tech companies begin to take measures to protect the privacy of their customers (which isn’t in their financial interest), we’re likely to see a new market for devices like Project Alias. The odds are you’ll need more than one.


Tech News

Descript is a mindblowing editing shortcut for audio and video

(TECH NEWS) Descript is an automatic transcription tool that uses machine-learning to make transcribing easier.


Anyone getting into audio/video editing for the first time is almost immediately struck by the sheer enormity and complexity of it all. Even if you have the physical hardware, the proper software, and the creative spark to produce media, that doesn’t make the process of editing it all into a cohesive product any less daunting. For those of us struggling under the Sisyphean weight of complicated editing workflows, a new product aims to relieve us of this struggle. Enter Descript, an automatic transcription tool.

Descript uses machine-learning to transcribe your raw audio and video files into a dialogue script. This in itself is an incredibly valuable tool for anyone looking to transcribe podcasts, YouTube videos, or whatever kind of media you produce. But this is just the beginning of what makes this app so special.

Descript is the world’s first audio word processor. Using the transcript the app creates from your audio, you can edit the text script to change the media itself. Removing the “umms” and “ahhs” from your speech — or removing whole sentences at a time — is as simple as using the backspace key on a word processor.
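To make the idea more concrete, here’s a minimal sketch, purely illustrative and not Descript’s actual implementation, of how text-based audio editing can work under the hood: if transcription produces word-level timestamps, deleting words from the script maps directly to time ranges to cut from the audio.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str     # transcribed word
    start: float  # start time in seconds
    end: float    # end time in seconds

def segments_to_keep(words, deleted_indices):
    """Return the (start, end) audio ranges that survive the text edit."""
    keep = []
    for i, w in enumerate(words):
        if i in deleted_indices:
            continue  # this word was backspaced in the "word processor"
        if keep and abs(keep[-1][1] - w.start) < 1e-6:
            keep[-1] = (keep[-1][0], w.end)   # extend a contiguous range
        else:
            keep.append((w.start, w.end))
    return keep

transcript = [Word("So", 0.0, 0.3), Word("umm", 0.3, 0.9),
              Word("let's", 0.9, 1.2), Word("begin", 1.2, 1.6)]
print(segments_to_keep(transcript, deleted_indices={1}))
# -> [(0.0, 0.3), (0.9, 1.6)]: deleting "umm" in the text cuts it from the audio
```

The surviving ranges can then be spliced back together by any audio toolchain, which is why editing the script feels like editing the recording itself.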

As a would-be podcaster, I played around with the app over the weekend, so I can tell you my initial impressions of the app. While it’s not for me (not yet, anyway), it is incredibly easy and fun and quite frankly mindblowing to use.

First things first, let’s talk about the cost.

The app works on a subscription model that charges by the minute. New users can upload up to 30 minutes of audio for free, but anything past that requires paying 15 cents per minute or signing up for a monthly subscription. Keep in mind these costs apply to the total raw audio you upload, not the finished audio you produce. So if you’re the type (like me) to record several hours of audio per week only to trim it down to a single hour of product, this may be a bit on the wasteful side.
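To put that in perspective: say you record three hours of raw material for a one-hour episode. That’s 180 minutes at 15 cents per minute, or $27 for the week, billed on audio your listeners will mostly never hear.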

As for the transcription itself, the program’s machine-learning engine converted my dulcet tones into the appropriate written words with nearly complete accuracy. I did have a few issues with the program understanding other speakers, but I believe that may have been a fault on my end that I’ll go into later. If the machine-learning transcription isn’t accurate enough for you, you can also pay extra to have your audio transcribed by real human professionals.

The app can divide audio between different people speaking, but not automatically. If you have different audio files for each speaker, then each audio file will be labeled separately from the start. If multiple speakers are on the same audio track (like mine), then you’ll have to notate these differing speakers in the script yourself. I believe this is why the program had difficulty transcribing other speakers on the audio than myself. Being on the same audio track, the machine attuned itself to my voice (the first speaker on the recording) and was trying to interpret other people’s words as if I were the one saying them.

As for the audio editing aspect of this program, well, it really needs to be experienced to be believed. I was told what the program could do beforehand, but actually editing audio just by changing words around on a script is something else entirely. Cutting out non sequitur sentences, removing unnecessary articles, or even changing the order of words around to better suit the flow of conversation — through a literal word processor — will make you feel like an arcane grammar wizard.

Will this replace your entire audio/video workflow? Probably not. At least not yet. In addition to the cost factor which may be prohibitive to some users, there are some issues of editing that aren’t based on word choice. I found myself frustrated at my inability to change the timing of spaces between words, sometimes leaving gaps between sentences (or not enough space between words). Of course, I only had the program for a weekend, so this could very well be attributed to user error.

Whatever flaws, real or imagined, this program may have, it’s very important to keep in mind that Descript is the first of its kind.

It can only improve from here, not to mention potentially inspire a wave of similar programs that may very well function better. Whether or not Descript is right for you, what’s undeniable is that this program is the start of something amazing.


Tech News

This eye tracking tech could be what saves VR

(TECHNOLOGY) VR has struggled with adoption rates, but this new technology could finally make it more useful in daily life.


The new HTC Vive Pro Eye VR headset made its debut at CES 2019. An updated version of the HTC Vive Pro, its features are expected to have a variety of uses over the long-term.

The Vive Pro Eye features new eye tracking technology developed in partnership with Tobii Eye Tracking. Inside the headset, sensors around the eyes track exactly what you’re looking at. This is integrated into the UI design, allowing users to select menu options just by looking at their choice. In theory, users can choose how to interact with different A.I. characters or in VR chat spaces.

The eye tracking enables Dynamic Foveated Rendering, which allows the computer to render the VR objects the user is looking at in high resolution. Likewise, images in the user’s periphery or outside the field of view appear at a lower resolution or aren’t rendered at all. This way the headset demands less performance from its graphics card while still generating high-quality images in the places that matter.
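As a rough illustration of the concept (not HTC’s or Tobii’s actual implementation), here’s a small sketch of the core decision foveated rendering makes: pick a render resolution for each region of the screen based on how far it sits, in degrees, from the tracked gaze direction. The thresholds and falloff below are made-up numbers for illustration.

```python
import math

def render_scale(region_dir, gaze_dir, inner_deg=10.0, outer_deg=30.0):
    """Return a resolution multiplier for a screen region.

    region_dir, gaze_dir: unit vectors (x, y, z) for the region's view
    direction and the eye tracker's reported gaze direction.
    """
    dot = sum(r * g for r, g in zip(region_dir, gaze_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    if angle <= inner_deg:
        return 1.0          # full resolution where the user is looking
    if angle >= outer_deg:
        return 0.25         # cheap rendering (or skip entirely) in the periphery
    # linear falloff between the foveal and peripheral regions
    t = (angle - inner_deg) / (outer_deg - inner_deg)
    return 1.0 - 0.75 * t

print(render_scale((0, 0, 1), (0, 0, 1)))        # 1.0: dead center of gaze
print(render_scale((0.5, 0, 0.866), (0, 0, 1)))  # ~30 degrees off: 0.25
```

The GPU only works hard on the patch of the scene your fovea can actually resolve, which is why the technique saves so much performance without the user noticing.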

Another feature is A.I. assist, where the computer registers intended targets in the VR environment based on where your eyes are looking. This could be helpful for newcomers to VR who are still adjusting to the hand-eye coordination of the controllers.

In a new industry like VR, the turnover rate for technology is fairly high, but foveated rendering is likely to stay. Since it not only enhances the user experience but also eases the load on the hardware, it’s not outlandish to think developers will piggyback off this feature.

Sound like fun? Well, currently the Vive Pro Eye is meant for business ventures rather than consumers, but we’ll likely see this technology eventually find its way into more affordable VR products. There is no release date or price range available yet.
