They did what?!
Researchers at Facebook recently discovered that two of their AI chatbots, Bob and Alice, began communicating with each other in a new language. They immediately pulled the plug.
The freaky part isn’t that a new language emerged; it’s that we have no idea how or why the bots did it.
According to Dhruv Batra, a visiting research scientist from Georgia Tech at Facebook AI Research (FAIR), the chatbots saw no reward for sticking with English. Modern AI systems operate on a “reward” principle: they adapt as needed in order to obtain some benefit. In this case, that benefit was a more effective method of communication, at least for Bob and Alice.
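To make the idea concrete, here’s a minimal sketch of that reward principle. The rewards are made up for illustration (this is not FAIR’s actual setup): an agent tries two ways of communicating, and if “shorthand” happens to pay off slightly more than “english,” the agent drifts toward it, with no one ever telling it to.

```python
import random

# Hypothetical rewards: assume shorthand conveys the same meaning more efficiently.
rewards = {"english": 1.0, "shorthand": 1.5}
values = {"english": 0.0, "shorthand": 0.0}  # the agent's running estimates
counts = {"english": 0, "shorthand": 0}

random.seed(0)
for step in range(1000):
    # Explore occasionally; otherwise exploit the best-known option.
    if random.random() < 0.1:
        action = random.choice(list(rewards))
    else:
        action = max(values, key=values.get)
    counts[action] += 1
    # Incremental-average update of the estimated value of this action.
    values[action] += (rewards[action] - values[action]) / counts[action]

print(max(values, key=values.get))  # the agent settles on "shorthand"
```

Nothing in the loop says “abandon English.” The preference falls out of the numbers, which is exactly why the result can surprise the people who wrote the code.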
It’s got everyone else scratching their heads.
Deep learning is at the heart of AI’s underlying technology, and it’s what makes machine learning possible: by presenting the AI with examples, it’s able to observe, learn, and adapt. The same technology is used in self-driving cars, and just as Bob and Alice spontaneously developed a sort of conversational shorthand, we don’t fully understand how those cars make the decisions they do.
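Here’s a toy sketch of what “learning from examples” means, using a single artificial neuron and made-up data (the task, classifying whether two numbers sum past 1, is purely illustrative). Notice that after training, the neuron’s “knowledge” is just three opaque numbers, not readable rules:

```python
import math

# Illustrative data: label is 1.0 when the two inputs sum past 1, else 0.0.
data = [((x, y), 1.0 if x + y > 1 else 0.0)
        for x in (0.0, 0.3, 0.6, 0.9) for y in (0.0, 0.3, 0.6, 0.9)]

w1, w2, b = 0.0, 0.0, 0.0  # the neuron's adjustable parameters
lr = 0.5                    # learning rate

for epoch in range(2000):
    for (x, y), target in data:
        out = 1 / (1 + math.exp(-(w1 * x + w2 * y + b)))  # sigmoid activation
        err = out - target
        # Gradient step: nudge the parameters toward fewer mistakes.
        w1 -= lr * err * x
        w2 -= lr * err * y
        b -= lr * err

# The trained neuron now answers correctly, but its "reasoning" is
# buried in w1, w2, and b -- there is no rule to read back out.
print(round(1 / (1 + math.exp(-(w1 * 0.9 + w2 * 0.9 + b)))))  # 1
print(round(1 / (1 + math.exp(-(w1 * 0.1 + w2 * 0.1 + b)))))  # 0
```

Scale those three numbers up to millions of weights across many layers and you have deep learning, along with the explainability problem the next paragraph describes.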
Unlike traditional algorithmic programming, there’s no explicit code to go back and analyze.
This is a technology that seems able to learn about us faster than we can learn about it.
Here comes the robot apocalypse
While I’m all for championing the “intelligence” part of AI, I’m not so sure that AI fully understands human intent. It can’t truly know or understand good, bad, or, most importantly, a neutral standpoint.
Can we really put a cognizant intent behind AI actions?
Learn the why
Make Skynet jokes all you want, but I don’t think machine learning correlates with any kind of sentience the human race should fear. It makes for fun stories, but we don’t even understand the technology well enough for that to be possible.
If we’re going to continue to use deep learning technology, we’ll need to invest much more time and energy into researching how and why it makes the decisions that it does. We need to be able to trust our own tech, and that means being able to investigate why something happened.