
Tech Gadgets

Why Google’s Deep Dream project is more than just a trippy trick

A programmer discovered that computers can “dream,” and the imagery is amazingly bizarre and sometimes creepy. What are the implications of this? Is it more than just a cool trick?


Dreaming deep, sound asleep

As machines become increasingly intelligent, they are also becoming more artistic.

Google’s Deep Dream is making a huge splash on the web. It was originally coded by Alexander Mordvintsev, a programmer working in security systems who liked to play around with artificial intelligence as a side project. In the middle of the night last May, he discovered the lines of code that would cause Google’s neural net to generate original images that look like a psychedelic combination of Salvador Dalí and Lisa Frank. He posted his images on Google’s internal Google+ account, and was soon paired with young programmer Chris Olah and software engineer/sculptor Mike Tyka to develop Deep Dream.


REM for your RAM

The Deep Dream team has created an entire gallery of surrealistic art. Animal parts of different species combine to form fantastical beasts, backgrounds fill with swirling patterns, and spiders emerge from cloudless skies.

In July, the Deep Dream team released the software on GitHub so that the general public could turn their family portraits and vacation photos into bizarre art pieces. New apps are popping up, several grotesque portraits of presidential candidates have been produced, and the band Wilco used a Deep Dream image on the cover of its latest album. Samim Winiger, who created software that makes animations from Deep Dream images, says that “in five years we won’t recognize Photoshop,” alluding to the possibility for Deep Dream technology to become a major feature in our visual world.

But is there more to it?

Winiger refers to Deep Dream as “creative AI [artificial intelligence].” But can a computer be said to have creativity? The dreamlike (or, at times, nightmarish) quality of Deep Dream images has certainly caused some observers to posit that Deep Dream is pulling images from the “subconscious” of Google’s mind. But a computer, no matter how smart, is not a brain. So is Deep Dream just the robot equivalent of a cool party trick?

Deep learning in the neural net

But Deep Dream wasn’t created just to blow our minds with freakish four-eyed kittens and giant tarantulas crawling from the sky. It’s also a useful way for programmers to study artificial intelligence. Computers can now achieve what programmers call “deep learning” by processing information through a neural net (NN). Neural nets are meshes of artificial neurons layered one over the other, like spider webs. Information is passed through several layers of the NN, and each layer analyzes it from a different angle. The topmost layer is responsible for the output of information that has been “learned” by deeper layers of the net.
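That layered pass can be sketched in a few lines of Python. This is a toy stand-in, not Google’s actual net – the layer count, sizes, random weights, and ReLU nonlinearity are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(x, w):
    """One artificial layer: a linear transform followed by a
    nonlinearity, so each layer re-analyzes the previous one's output."""
    return np.maximum(0.0, w @ x)

x = rng.normal(size=4)                                  # incoming information
weights = [rng.normal(size=(4, 4)) for _ in range(3)]   # three stacked layers

for w in weights:        # information passes through the net, layer by layer
    x = layer(x, w)

output = x               # the topmost layer is responsible for the output
```

In a real vision net the layers are convolutions and there are dozens of them, but the flow is the same: each layer’s output becomes the next layer’s input.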

Google has made great strides toward teaching its neural net to visually recognize objects. The net produces an image of whatever it’s viewing; that image is then graded for accuracy and fed back into the computer, giving the NN an opportunity to learn from its mistakes and eventually correct itself automatically.
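The grade-and-feed-back loop can be illustrated with a toy example – here the “net” is a single adjustable weight, and the numbers and grading rule are made-up assumptions, not Google’s training pipeline:

```python
target = 4.0   # the "correct" answer for the input below
x = 2.0        # the input the net is viewing
w = 0.0        # the net's single adjustable weight

for _ in range(50):
    guess = w * x              # the net produces an output
    error = guess - target     # the output is graded for accuracy
    w -= 0.1 * error * x       # the grade is fed back as a correction

# After enough rounds the net has "learned from its mistakes":
# w * x now sits very close to the target.
```

Real networks repeat exactly this idea across millions of weights at once, which is what makes the process slow to train but powerful once trained.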

Layered learning, and pattern detecting

So far, it has been hard for researchers to know exactly what is happening at each layer of the neural net. But a researcher can have a computer produce a Deep Dream image from a specific layer of its neural net, revealing exactly what that layer is learning. In this way, researchers are discovering more about what happens inside an artificial mind.

What researchers have found is that computers may perceive more, and recognize patterns better, than humans do. It’s like having a highly imaginative child watch clouds. If a cloud looks a little bit like a ship, the neural net will run the image through a feedback loop until a highly detailed ship emerges. This is why Deep Dream can create images even out of random noise – it detects patterns that a human wouldn’t notice.
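That feedback loop – amplify whatever a layer faintly responds to – is the heart of Deep Dream, and it can be sketched as gradient ascent on random noise. Everything here (the single hand-made filter, the tiny image, the step size) is an illustrative assumption, not Google’s implementation:

```python
import numpy as np

def layer_activation(img, kernel):
    """Mean squared response of one convolutional filter --
    a stand-in for a single layer of a real neural net."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.mean(out ** 2)

def dream_step(img, kernel, lr=0.1, eps=1e-3):
    """One feedback-loop step: nudge every pixel in the direction that
    makes the filter respond more strongly (numerical gradient)."""
    grad = np.zeros_like(img)
    base = layer_activation(img, kernel)
    for idx in np.ndindex(img.shape):
        bumped = img.copy()
        bumped[idx] += eps
        grad[idx] = (layer_activation(bumped, kernel) - base) / eps
    return img + lr * grad / (np.abs(grad).max() + 1e-8)

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))    # pure random noise
edge = np.array([[1.0, -1.0]])   # a tiny hand-made "edge detector"

before = layer_activation(img, edge)
for _ in range(20):
    img = dream_step(img, edge)
after = layer_activation(img, edge)
# The filter's response grows: faint patterns the "layer" is tuned to
# have been amplified out of noise -- the essence of Deep Dream.
```

In the real system the filter is a whole trained network layer and the image is a photograph, but the loop is the same: measure what the layer sees, then push the image to show more of it.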

This has far-reaching implications for tasks where artificial intelligence may eventually match or surpass humans. For example, researchers are using neural nets to read ultrasounds, detecting tumors invisible to the human eye.

Final thoughts

So, is artificial intelligence becoming creative? Is a computer an artist? That depends on how you define creativity, and where you draw the line between the “real” and the “artificial.” But Deep Dream engineer Mike Tyka is impressed: “If you think about human creativity, some small component of that is the ability to take impressions and recombine them in interesting, unexpected ways” – the same ability Deep Dream displays.

Regardless of whether or not this is true “creativity,” the world seems to agree with Tyka that when you let a computer come up with original art, “it’s cool.”

Steven Levy was granted the first interview with the Deep Dream team. You can read his report at Medium.com.


Ellen Vessels, a Staff Writer at The American Genius, is respected for their wide range of work, with a focus on generational marketing and business trends. Ellen is also a performance artist when not writing, and has a passion for sustainability, social justice, and the arts.


Google acquires AR manufacturer, North, but what for?

(TECH GADGETS) Google has recently purchased North, an AR startup that boasts impressive 3-D holographic visual displays, but what it plans to do with the acquisition is unclear.


If you allowed pop culture to influence your beliefs about what the 21st century might look like, then you — like most of society — have probably not-so-secretly been hoping that today might vaguely resemble the marvels promised to us by the Back to the Future franchise. After all, we were all assured that we’d have hoverboards to shuttle around on, 3-D holographic advertisements to admire, and a Florida baseball team to root for.

Reality, however, has proven to be starkly different from this fantasy. Sadly, we only got one of these three incredible offerings, but the tech startup North is now trying to change all of that by providing us with a new, augmented reality alternative.

It’s fair to say that North, an AR smart lens manufacturer, has been met with both significant hype and equally significant challenges. While the enthusiasm about the company has been reasonably justified (a holographic real-time display in your field of vision is admittedly a pretty cool idea), it still repeatedly fell short of expectations. Numerous problems from the get-go can be blamed for holding the company back, too.

What issues, you might be asking? Well, for instance, the price of getting your hands on a pair of these sci-fi specs was an exorbitant $999. And if you wanted to be properly fitted for them, you not only had to shell out those beaucoup dollars, you also had to pop into one of the company’s only two brick-and-mortar retail shops. Even lowering the price of their AR glasses (dubbed “Focals”) to a mere $600 per pop couldn’t save North from floundering.

The company’s struggles gradually became public through a series of actions. First, North laid off roughly 150 staff. Then it came to light that the company had secured $40 million in bridge financing to stay afloat. Its next step was to cut out the middleman (the retail shops) and take the business entirely online. And if that wasn’t enough, North finally pulled Focals from its inventory, with a vow to roll out an even better product (Focals 2.0) sometime in 2020.

If you were wondering where this new and improved product was, then wonder no longer: it was never made. Perhaps coronavirus squashed operations. Maybe North couldn’t drum up any more capital for its product. Either way, it was obvious that the company needed another major bailout…and we now know that its much-needed helping hand has come from an unexpected place. In an announcement this week, Google revealed that it has acquired the flailing AR tech company, and the two now plan to join forces to potentially (finally!) see this project through.

Google itself is no stranger to AR, and many people may recall its attempts to get its own AR smart lenses (called “Google Glass”) up and running. Like Focals, though, Glass simply couldn’t gain enough traction to become a popular product from the tech giant. While Google Glass is still available for purchase, it never became the mainstream tech revolution that Google had hoped it would be.

It’s exciting to see these two augmented reality greats come together with a unified goal in mind. After all, they already have a lot in common, with both companies serving as notable innovation masterminds, highly capable of designing and creating impressive AR technology. With that said, it’s still unclear what Google plans to do with its new purchase. Details of the acquisition are understandably hush-hush, and it’s been reported that all evidence of the first-gen of Focals will be scrubbed from the app store by the end of July 2020.

Perhaps this merger will finally allow us to see the much-anticipated Focals 2.0 come to life. Who knows? We eventually got to see the Miami Marlins not only become an actual baseball team, but also win the World Series (not once, but twice!). So is it that much more of a leap to also expect to see affordable holographic displays in our visual field? It’s an intriguing premise, and one that’s exciting to consider. Heck, we’re right there on the cusp of having real-deal hoverboards, too, so maybe this new version of augmented reality can finally become a true reality, as well.


Google Glass didn’t succeed, but Apple’s AR glasses might

(TECH GADGETS) Apple Glass: Are AR glasses gimmicky, or can Apple improve where Google failed? The potential is enormous, but can Apple meet the expectations?


Apple may announce a new addition to the iFamily this year: Apple Glass, a set of AR glasses to complement existing Apple products. Even though we’ve seen this story before, here’s why Apple’s rumored eyewear might deserve your attention–if not your money.

This certainly isn’t the first time a technology company has taken their brand name and slotted the word “Glass” after it to create hype. In 2015, Google Glass was discontinued–quite publicly, in fact–due to a variety of issues, chief among which were privacy concerns, and an untenable price tag of around $1500. Lacking a clear market and suitable demand, the shades were put to rest, though it should be noted that a rebranded version is available now (for $999).

Apple is a company that has, in the past, shown a propensity for iteration rather than innovation; the Apple Watch, while a stylish and functional improvement on existing wearable technology, wasn’t even close to the first of its kin, and early versions of the iPad were scrutinized against similarly sized, lower-priced counterparts. This isn’t to say that Apple doesn’t do tech better–just that they are, often enough, pretty late to the party.

In the case of AR glasses, this is a habit that may suit Apple well.

Put bluntly, there isn’t a clearly established need for smart glasses. Critics of the Apple Watch were quick to say the same thing about that implement, yet anyone who has worn one for a few hours can recognize (if not fully appreciate) the handiness–no pun intended. It seems fair to afford Apple some grace with this in mind, but the fact remains that the demand for a set of AR glasses simply isn’t there yet.

On the other hand (again, no pun intended), Apple is the master of creating demand and hype where previously there was naught but slumber. For this reason, it behooves us to keep an eye on Apple’s unveiled tech this year–if for no other reason than to know for sure how the company plans to address the sticky issue of AR wearables.

After all, there are numerous medical, exploratory, and generally functional applications for which one could feasibly use AR in a beneficial (not gimmicky) manner, and if Apple is able to expedite that process, far be it from us to criticize. Yet.


The Apple Watch isn’t just a way to ignore calls, it could save your life

(TECH GADGETS) A lot of people balked at the idea of an Apple Watch, and even though many of its features seem superfluous, it has actually saved lives.


Apple products are known for invasive yet convenient features–Face ID, Keychain, and AirDrop being some of the more notable ones–but the Apple Watch emergency dial feature might be the most useful one of them all.

If you’ve had the pleasure of setting up an Apple Watch from scratch, you know that the Health app asks some invasive questions. This app, among other things, is responsible for curating a list of emergency contacts (something you can also populate via the Contacts app on your iPhone)–and this list might save your life if you take an unexpected tumble, at least if you have a Series 4 or 5 watch.

The way the feature works is relatively simple: If the watch senses that the user has taken a hard fall, it will initiate a haptic pulse along with a message asking the user to confirm that they are okay. Should the user fail to address this notification, the watch will call emergency services–and the user’s emergency contact list–with details including the user’s GPS coordinates.
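That flow–detect a hard fall, prompt the wearer, escalate only if the prompt goes unanswered–amounts to a small decision routine. The sketch below is purely hypothetical (the threshold, names, and call order are assumptions; Apple’s real implementation is proprietary):

```python
from dataclasses import dataclass, field

FALL_G_THRESHOLD = 3.0   # assumed impact threshold, in g (illustrative)

@dataclass
class FallResponder:
    emergency_contacts: list
    calls_placed: list = field(default_factory=list)

    def on_impact(self, peak_g, wearer_responded, gps):
        """Mirror the flow described above: ignore soft bumps, dismiss
        the alert if the wearer responds, and escalate otherwise."""
        if peak_g < FALL_G_THRESHOLD:
            return "no_action"
        if wearer_responded:
            return "dismissed"
        # Unanswered prompt: call emergency services, then every
        # emergency contact, passing along the GPS coordinates.
        self.calls_placed.append(("emergency_services", gps))
        for contact in self.emergency_contacts:
            self.calls_placed.append((contact, gps))
        return "escalated"

watch = FallResponder(emergency_contacts=["partner", "parent"])
result = watch.on_impact(5.2, wearer_responded=False, gps=(30.27, -97.74))
# result == "escalated"; three calls are logged, each with coordinates
```

The key design point is that escalation is the last resort: a confirmed-okay wearer short-circuits the whole chain, which is why the response prompt matters so much.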

The fall detection feature has reportedly worked for a few Apple Watch owners, one of whom passed out and didn’t wake up until emergency services arrived.

It is worth noting that the Apple Watch has another potentially life-saving feature: an ECG app that works alongside the Heart Rate app. In theory, these can detect abnormalities in one’s heartbeat, such as an irregular rhythm, and warn the user of a potential problem. Anyone who owns an Apple Watch knows that the Heart Rate app can be finicky, but Apple seems likely to continue tweaking it as the watch ages.

While several owners have publicly attested to the effectiveness of these features, this shouldn’t be taken as an endorsement of the Apple Watch’s ability to save a life. An Apple Watch is still, first and foremost, a novelty–one that won’t always perform the way it’s meant to.

Future iterations of the watch–starting with the Series 6–are expected to expand on these medical features by adding monitoring for blood oxygen levels as well as improvements on existing features.
