According to I, Robot and Asimov's Three Laws of Robotics, there are only four outcomes for a computer- and robot-driven future: a Balanced World, a Frustrating World, a Killbot Hellscape, or a Frustrating Standoff.
Now, if you’re like me and have only watched the movie I, Robot, you’re probably wondering what exactly I’m on about. Good question. I have an answer (well, sort of), I promise.
AI Inception in real life?
Recently, the creators of Google Home announced that Google’s AI had begun to generate its own AI. On the surface, the sentence “Google’s AI has begun to generate its own AI” is utterly terrifying.
Look just a tad deeper, though, and it’s about as alarming as burning your morning toast.
You see, Google’s AI is actually just a series of algorithms that predict outcomes; these algorithms are typically layered on top of one another to create what are called neural networks.
Again, what does this mean?
Well, think of it this way: a Google Home is a machine that predicts results based on a past data set, so it amounts to a single network. Multiple Google AIs are just that – multiple prediction machines working on top of each other, each narrowing down the field as much as possible, creating in essence a cyber-brain… a neural network… a very precise prediction.
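The layering idea can be sketched in a few lines of Python. This is a toy illustration, not Google’s actual system: each “layer” is a small predictor, and the second layer makes its prediction from the first layer’s outputs. The weights are hand-picked for illustration, not learned.

```python
import math

# Toy illustration of "prediction machines layered on top of each other".
# Each unit is one small predictor; the second layer predicts from the
# first layer's outputs instead of the raw inputs.

def unit(inputs, weights, bias):
    """One prediction unit: weighted sum of inputs squashed to 0..1."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid squashing

def tiny_network(x1, x2):
    # Layer 1: two independent predictors look at the raw inputs.
    h1 = unit([x1, x2], [2.0, -1.0], 0.0)
    h2 = unit([x1, x2], [-1.0, 2.0], 0.0)
    # Layer 2: a predictor stacked on top of layer 1's outputs.
    return unit([h1, h2], [1.5, 1.5], -1.5)

print(tiny_network(1.0, 0.0))  # a value strictly between 0 and 1
```

Stacking enough of these layers, with learned rather than hand-picked weights, is all a “cyber-brain” really is.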
You might have heard the term AutoML thrown around in the AI conversation. ML (machine learning) is the technique used to create these neural networks.
Historically, ML has meant handing a computer a data set and asking it to make predictions exactly as humans would, but in a much smaller time frame – i.e., data processing on speed. The neural networks created by AutoML are pretty self-explanatory: data sets are fed into a computer that is given the capability to analyze each data set separately and then against the others, finding outliers, commonalities, and so on.
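Stripped to its core, “a computer given a data set and asked to predict” looks something like the sketch below: fit a simple model to past observations, then use it on an unseen input. The straight-line fit is a stand-in for the far larger models in real systems, and the numbers are made up for illustration.

```python
# Minimal sketch of machine learning: fit y = a*x + b to past data
# with ordinary least squares, then predict an unseen input.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# "Past data": noisy observations of roughly y = 2x + 1.
xs = [1, 2, 3, 4, 5]
ys = [3.1, 4.9, 7.2, 9.0, 11.1]

a, b = fit_line(xs, ys)
prediction = a * 6 + b  # predict for the unseen input x = 6
```

Everything from a Google Home to Deep Patient is this same loop scaled up: past data in, a fitted model, a prediction out.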
So Google’s AI hasn’t really created its own AI; it has just gotten exponentially better at doing what humans do, in a fraction of the time.
The tough questions
Where all of this AI business starts to get a bit dodgy is when we can’t figure out how a machine taught itself something, or how it arrived at a particular conclusion.
Take, for instance, the algorithms in Nvidia’s autonomous car test. Nvidia is a tech company that provides cars with computer chips that have learned how to drive by watching a human driver.
Learning from humans is all well and good until you provide some hypotheticals.
Hypotheticals such as: what if one day the car decides not to stop at a red light and instead hits other cars and pedestrians? The engineers behind the chip and the car can’t explicitly outline how its actions are learned, and thus can’t explain how or why a given action takes place.
In another instance, 700,000 records of actual patients at Mount Sinai Hospital in New York City were fed into a computer for analysis, more or less to confirm what doctors had already established. What the computer spit out, though, was beyond what people thought possible. The system, Deep Patient, provided additional diagnoses, including predicting with relative ease the onset of schizophrenia, a psychiatric disorder that is notoriously hard to diagnose.
Where we’re at
So has Google’s AI created its own AI? Has a self-driving car learned to drive like a human, from a human? Has a computer gotten better at diagnosing mental disorders than a psychologist? No, yes, and yes. All of these AI ideas are really quite simple.
They are given input and produce an output.
Where the problem lies is in our ability to understand how they got from the source to the conclusion. It is like failing math class all over again: one week you get it, and the next week you’re clueless. Google’s creation of its own AI is really just its ability to stack networks on top of one another with precision and ease. As for Nvidia and Deep Patient, it is layered data sets on top of data sets.
Precise to a fault
None of these things, in truth, created themselves out of nothing, nor did they conjure themselves from the data sets we gave them. They created what we asked for, just a much more precise version than we predicted.
As humans, we expect to understand data before we trust it, and so the next phase of AI will have to include machines that can produce a rationale for their decisions, so that we don’t collectively fear them.