What the hell is Zero UI and what does it say about our future?

(TECH NEWS) You’ve probably started hearing people talking about “Zero UI” more and more. What does it mean? Let’s find out together.

The new buzzword

Everyone got a Google Home or Amazon Echo for the holidays and now I keep hearing the term “Zero UI” pop up. What does it mean? Before researching this story I had no idea either, so don’t feel like a dim bulb – let’s learn together!

Natural language and gestures

Zero UI focuses on interacting with technology in more natural ways, moving away from a screen-focused experience. Technology is now learning our language rather than vice versa.

So instead of providing stilted commands to our phones and devices, we can speak in a more human way or simply gesture.

Andy Goodman, who coined the term, explains that “humans have always had to interact with machines in a really abstract, complex way.” Zero UI utilizes haptics, voice control, and artificial intelligence to make the whole experience more human.
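
To make that shift concrete, here’s a minimal, purely hypothetical sketch in Python. The phrases and intent names are invented for illustration, and real assistants rely on full speech recognition and language models rather than keyword overlap; the sketch only contrasts a rigid, memorized command with a looser match that tolerates natural phrasing:

```python
# Hypothetical sketch: a rigid command table vs. a loose, intent-style match.
# All phrases and intent names here are invented for illustration only.

RIGID_COMMANDS = {
    "turn on living room lights": "lights_on",
    "play music in the kitchen": "play_music",
}

INTENT_KEYWORDS = {
    "lights_on": {"light", "lights", "lamp", "on"},
    "play_music": {"play", "music", "song"},
}

def rigid_match(utterance):
    """Old style: the user must memorize the exact phrase."""
    return RIGID_COMMANDS.get(utterance.lower().strip())

def intent_match(utterance):
    """Zero UI style: score intents by overlap with however the user phrases it."""
    words = set(utterance.lower().split())
    best = max(INTENT_KEYWORDS, key=lambda intent: len(words & INTENT_KEYWORDS[intent]))
    return best if words & INTENT_KEYWORDS[best] else None

print(rigid_match("could you switch the lights on please"))   # None
print(intent_match("could you switch the lights on please"))  # lights_on
```

The burden shifts from the person memorizing the machine’s phrasing to the machine accommodating the person’s.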

In command, in control

Right now, voice control is at the forefront of the Zero UI goal. Although voice commands have been around for quite a while, we’re seeing the most promising results in futuristic home technology through more integrated uses of them.

It turns out I’m not just imagining hearing Alexa’s name more frequently than usual this year. Amazon reported millions of Alexa devices were sold this holiday season. Sales for the Amazon Echo family were up nine times from last year and the Google Home devices spent some time on backorder in stores. Zero UI devices are quickly gaining traction with consumers.

As devices begin to interact with us in a more normal way, they move from a trendy piece of technology to an integral part of our lives.

Goodman believes the future of Zero UI requires designing in a multi-dimensional way, literally. He explains that designers need to consider not just the 2D, linear way consumers use their products, but every possible interaction.

Devices must adapt to our stream-of-consciousness ways of interacting with them if they want to stay relevant. This includes not just voice commands, but also more intuitive body language recognition systems.

Beyond the screen

Goodman notes Zero UI devices will “need to have access to a lot of behavioral data, let alone the processing power to decode them.” Additionally, non-linear design is going to require a completely different set of skills than current app design. Designers will need to think beyond the screen when programming devices in order to create highly adaptable products.

A quick note

While Zero UI does sound promising, Goodman cautions that the term isn’t meant to be taken literally.

User interface of some kind will always be present. It’s really a matter of moving away from screens, not the complete elimination of user interface.

Lindsay is an editor for The American Genius with a Communication Studies degree and English minor from Southwestern University. Lindsay is interested in social interactions across and through various media, particularly television, and will gladly hyper-analyze cartoons and comics with anyone, cats included.

1 Comment

  1. Jack Smith

    January 20, 2017 at 6:39 am

    The problem for Echo/Alexa is the foundation difference. Google built theirs based on inference and NOT commands.

    So with Google Home you just talk naturally and say what you want. Wife can say it one way and I a completely different way and get the same result.

    We have had the Echo since it was launched and now have several of the Google Homes. We keep the Echo in the kitchen and the Google Homes in our bedroom, and the kids have theirs in their bedrooms.

    My fav feature right now is the ability to stay warm under the covers and control the TV. Just started working last week without me adding a skill or anything.

    Actually my wife discovered it while watching a movie when our kid walked into the room. She said “hey google pause” and the movie paused. When the kid left she said “hey google rewind” and it went back some set amount. Not sure if she’s joking, but she now does it in situations where Google Home is NOT available, like sitting at a traffic light saying “hey google change light green”.

    The Echo is a great piece of technology but the Google Home is just different as it seems to have more of a brain inside.

4 ways startups prove their investment in upcoming technology trends

(TECH NEWS) Want to see into the future? Just take a look at what technology the tech field is exploring and investing in today — that’s the stuff that will make up the world of tomorrow.


Big companies like to scout for small ones that have proven ideas and prototypes, rather than take the initial risk on themselves. So startups have to stay ahead of technology by their very nature, in order to be stand-out candidates when selling their ideas to investors.

Innovation Leader, in partnership with KPMG LLP, recently conducted a study that sheds light on the bleeding edge of tech: the technologies that the biggest companies are most interested in building right now.

The study asked its respondents to group 16 technologies into four categorical buckets, which Innovation Leader CEO Scott Kirsner refers to as “commitment level.”

The highest commitment level, “in-market or accelerating investment,” basically means that technology is already mainstream. For optimum tech-clairvoyance, keep your eyes on the technologies which land in the middle of the ranking.

“Investing or piloting” represents the second-highest commitment level, meaning companies have offerings that are approaching market-readiness.

The standout in this category is Advanced Analytics. That’s a pretty vague title, but it generally refers to the automated interpretation of and prediction from data sets, and it overlaps with machine learning.
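
As a toy illustration of what “prediction from data sets” means, here’s a tiny, self-contained Python sketch (the numbers are invented) that fits a straight line to past values and extrapolates the next one, the same basic idea that advanced analytics platforms automate at far greater scale and sophistication:

```python
# Toy example only: fit a line to invented monthly sales and predict next month.
# Meant to illustrate "prediction from a data set", not any real analytics product.

months = [1, 2, 3, 4, 5, 6]
sales = [110, 118, 131, 140, 152, 159]  # made-up numbers

n = len(months)
mean_x = sum(months) / n
mean_y = sum(sales) / n

# Ordinary least squares for a single feature (slope and intercept).
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(months, sales)) / sum(
    (x - mean_x) ** 2 for x in months
)
intercept = mean_y - slope * mean_x

next_month = 7
forecast = intercept + slope * next_month
print(f"Predicted sales for month {next_month}: {forecast:.1f}")
```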

Wearables, on the other hand, are self-explanatory. From smart watches to location trackers for children, these devices often pick up on input from the body, such as heart rate.

The “Internet of Things” is finding new and improved ways to embed sensor and network capabilities into objects within the home, the workplace, and the world at large. (Hopefully that doesn’t mean anyone’s out there trying to reinvent Juicero, though.)

Collaboration tools and cloud computing also land on this list. That’s no shock, given the ongoing pandemic.

The next tier is “learning and exploring,” which represents lower commitment but a high level of curiosity. These technologies will take longer to become common, but only because they have an abundance of unexplored potential.

Blockchain was the highest ranked under this category. Not surprising, considering it’s the OG of making people go “wait, what?”

Augmented & virtual reality has been hyped up particularly hard recently and is in high demand (again, due to the pandemic forcing us to seek new ways to interact without human contact).

And notably, AI & machine learning appears on rankings for both second and third commitment levels, indicating it’s possibly in transition between these categories.

The lowest level is “not exploring or investing,” which represents little to no interest.

Quantum computing is the standout selection for this category of technology. But there’s reason to believe that it, too, is just waiting for the right breakthroughs to happen.

Internet of Things and deep learning: How your devices are getting smarter

(TECH NEWS) The latest neural network from the Massachusetts Institute of Technology marks a great leap forward for deep learning and the “Internet of Things.”


The deep learning that modifies your social media and gives you Google search results is coming to your thermostat.

Researchers at the Massachusetts Institute of Technology (MIT) have developed a deep learning system of neural networks that can be used in the “Internet of Things” (IoT). Named MCUNet, the system designs small neural networks that allow for previously unseen speed and accuracy for deep learning on IoT devices. Benefits of the system include energy savings and improved data security for devices.

A concept dating back to the early 1980s, the IoT is essentially a large group of everyday household objects that have become increasingly connected through the internet. They include smart fridges, wearable heart monitors, thermostats, and other “smart” devices. These gadgets run on microcontrollers, simple computer chips with no operating system and very little processing power and memory. This has traditionally made it hard to run deep learning on IoT devices.

“How do we deploy neural nets directly on these tiny devices? It’s a new research area that’s getting very hot,” said Song Han, Assistant Professor of Computer Science at MIT, who is a part of the project. “Companies like Google and ARM are all working in this direction.”

In order to achieve deep learning for IoT-connected machines, Han’s group designed two specific components. The first is TinyEngine, an inference engine that manages resources much as an operating system would. The other is TinyNAS, a neural architecture search algorithm. For those not well-versed in such technical terms, think of these as a miniature Windows 10 and machine learning tailored to that smart fridge you own.
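
To get a rough feel for the constraint a search like that operates under, here’s an illustrative back-of-the-envelope sketch in Python. The memory budgets and layer shapes are assumed figures, and this is emphatically not MCUNet’s actual algorithm; it simply estimates whether a small, quantized convolutional network fits a microcontroller’s flash and SRAM:

```python
# Illustrative back-of-the-envelope sketch only: a crude check of whether a tiny,
# int8-quantized CNN fits a microcontroller budget. MCUNet/TinyNAS do far more.

SRAM_BUDGET_BYTES = 320 * 1024    # assumed on-chip SRAM available for activations
FLASH_BUDGET_BYTES = 1024 * 1024  # assumed flash available for storing weights
BYTES_PER_VALUE = 1               # int8 weights and activations

# Hypothetical layer shapes: (output_channels, kernel_size, output_height, output_width)
layers = [
    (8, 3, 48, 48),
    (16, 3, 24, 24),
    (32, 3, 12, 12),
    (64, 3, 6, 6),
]

in_channels = 3                    # RGB input image
peak_activation = 3 * 96 * 96      # input buffer for a 96x96 RGB image
total_weights = 0

for out_channels, kernel, height, width in layers:
    total_weights += in_channels * out_channels * kernel * kernel
    peak_activation = max(peak_activation, out_channels * height * width)
    in_channels = out_channels

weights_bytes = total_weights * BYTES_PER_VALUE
activation_bytes = peak_activation * BYTES_PER_VALUE

print(f"weights: {weights_bytes / 1024:.1f} KB of {FLASH_BUDGET_BYTES / 1024:.0f} KB flash")
print(f"peak activations: {activation_bytes / 1024:.1f} KB of {SRAM_BUDGET_BYTES / 1024:.0f} KB SRAM")
print("fits:", weights_bytes <= FLASH_BUDGET_BYTES
      and activation_bytes <= SRAM_BUDGET_BYTES)
```

Roughly speaking, TinyNAS’s job is to explore many candidate architectures and keep only those that stay inside budgets like these while preserving accuracy.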

The results of these new components are promising. According to Han, MCUNet could become the new industry standard: “It has huge potential.” He envisions the system as one that could help smartwatches not just monitor heartbeat and blood pressure, but also analyze and explain to users what those readings mean. It could also make IoT devices far more secure than they currently are.

“A key advantage is preserving privacy,” says Han. “You don’t need to transmit the data to the cloud.”

It will still be a while until we see smart devices with deep learning capabilities, but it is all but inevitable at this point—the future we’ve all heard about is definitely on the horizon.

Google is giving back some privacy control? (You read that right)

(TECH NEWS) In a bizarre twist, Google is giving you the option to opt out of data collection – for real this time.


It’s strange to hear “Google” and “privacy” in the same sentence without “concerns” following along, yet here we are. In a twist that’s definitely not related to various controversies involving the tech company, Google is giving back some control over data sharing—even if it isn’t much.

Starting soon, you will be able to opt out of Google’s data-reliant “smart” features (Smart Compose and Smart Reply) across the relevant G Suite products: Gmail, Chat, and Meet. Opting out would, in this case, prevent Google from using your data to formulate responses based on your previous activity; it would also turn off the “smart” features themselves.

One might observe that users have had the option to turn off “smart” features before, but doing so didn’t disable Google’s data collection—just the features themselves. For Google to include the option to opt out of data collection completely is relatively unprecedented—and perhaps exactly what people have been clamoring for on the heels of recent lawsuits against the tech giant.

In addition to being able to close off “smart” features, Google will also allow you to opt out of data collection for things like the Google Assistant, Google Maps, and other Google services that draw on your Gmail inbox, Meet, and Chat activity. Since Google knowing what your favorite restaurant is or when to recommend tickets to you can be unnerving, this is a welcome change of pace.

Keep in mind that opting out of data collection for “smart” features will automatically disable other “smart” options from Google, including those Assistant reminders and customized Maps. At the time of this writing, Google has made it clear that you can’t opt out of one and keep the other—while you can go back and toggle on data collection again, you won’t be able to use these features without Google analyzing your Meet, Chat, and Gmail contents and behavior.

It will be interesting to see what the short-term ramifications of this decision are. If Google stops collecting data for a small period of time at your request and then you turn back on the “smart” features that use said data, will the predictive text and suggestions suffer? Only time will tell. For now, keep an eye out for this updated privacy option—it should be rolling out in the next few weeks.
