A game called Go
There’s a game called go. For lack of a better word, it’s perfect. All you need is a grid, and stones in two colors. You play a stone at an intersection on the grid. Surround empty intersections completely with your stones and they become your territory; surround a group of your opponent’s stones completely and those stones are captured and come off the board. The goal is to surround more territory than your opponent.
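That capture rule is simple enough to write down in a few lines. Here’s a toy sketch in Python (nobody’s real engine, just an illustration): a connected group of stones is captured when it has no “liberties,” meaning no empty points adjacent to it.

```python
# Toy sketch of go's capture rule: flood-fill a group of same-colored
# stones and collect its liberties (adjacent empty points).
# A group with zero liberties is captured.

def group_and_liberties(board, start):
    """Flood-fill the group containing `start`; return (group, liberties).

    `board` maps (row, col) -> 'B', 'W', or '.' for empty.
    """
    color = board[start]
    group, liberties = set(), set()
    stack = [start]
    while stack:
        point = stack.pop()
        if point in group:
            continue
        group.add(point)
        r, c = point
        for nbr in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if nbr not in board:
                continue  # off the edge of the grid
            if board[nbr] == '.':
                liberties.add(nbr)
            elif board[nbr] == color:
                stack.append(nbr)  # same color: part of the group
    return group, liberties

# A lone white stone surrounded on all four sides has no liberties:
board = {(r, c): '.' for r in range(5) for c in range(5)}
board[(2, 2)] = 'W'
for p in ((1, 2), (3, 2), (2, 1), (2, 3)):
    board[p] = 'B'
_, libs = group_and_liberties(board, (2, 2))
print(len(libs))  # 0 -- the white stone is captured
```

That’s the whole rulebook, more or less. Everything hard about go lives above this layer.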
It’s simple enough to teach a toddler, and so complex there are more possible games than atoms in the universe. It’s 3,000 years old, and it’s a mathematical certainty there have never been two identical games. It’s as much intuition as intellect: play styles are so indicative of personality that companies have been known to use the game in job interviews. Go is one of the masterpieces of humanity, a microcosm of the H. sapiens mind.
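The atoms comparison sounds like hyperbole, but it survives a back-of-envelope check. Each of a 19×19 board’s 361 points is black, white, or empty, so 3^361 upper-bounds the number of board positions, and the number of distinct *games* is vastly larger still. Estimates put the atoms in the observable universe around 10^80:

```python
# Back-of-envelope check of the "more games than atoms" claim.
positions_upper_bound = 3 ** 361   # each point: black, white, or empty
atoms_estimate = 10 ** 80          # rough figure for the observable universe

print(len(str(positions_upper_bound)))           # 173 digits, i.e. ~1.7e172
print(positions_upper_bound > atoms_estimate ** 2)  # True -- even atoms *squared* loses
```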
This May, a computer program called AlphaGo beat the best player alive. I’m not here to talk about go. I’d like to talk about your car.
Your car, but not
More specifically, I’m here to talk about the robot apocalypse as it relates to your car, because a) the robot apocalypse is my jam, and b) it has a surprising amount to do with a 3,000-year-old board game, and the fact that Google can beat us at it.
Modulo singularity, computers can’t do anything we don’t tell them to. The power of AI is that we’ve told them, more or less, to think.
The danger of AI is that we haven’t settled on standards of what they are and aren’t allowed to think about.
That brings us to your car, because, in real terms, it’s not your car. Even if you own it, straight cash, your insurance company has a financial stake in it. Repair companies have to make pricing decisions for it. Governments have to regulate what you do with it, because you can use it to kill people.
Every one of those tasks gets easier with AI.
One little robot in your car, doing useful things for you from setting appointments to setting the AC, and suddenly the government gets to know when you ran over those nuns, the mechanic gets to know that’s where the dents in the bodywork came from, the insurance company gets to know it was your fault.
Maybe too smart?
The benefits of AI-enabled “smart cars” are myriad. I’m genuinely, personally psyched about it. That little robot promises to be a present help with everything from GPS to streaming media.
But the plain fact is, any smart car (not *that* kind of Smart Car) is going to be collecting data for the benefit of people other than you.
As we’ve covered in the past, AI is clueless about context.
Unless it’s told otherwise, it won’t know the difference between a hard brake to save a fluffy squirrel, and a pause to twirl your mustache before barreling down with malice aforethought on the Sisters of the Sacred Heart.
Those pesky insurance premiums
So is the insurance company, or the dealership, or the mechanic gonna tell it otherwise? More importantly, are they gonna tell it otherwise if it isn’t specifically delineated as their job to do so? Because, at the risk of cynicism, failing to tell the AI the difference between squirrel-saving and rank villainy is a really good way to jack up insurance premiums.
Worse, albeit ethically better, what about an insurance company acting in good faith to remove the random human element from at-fault assessment, thereby ceding it to something that’s literally incapable of making subjective decisions?
Remember that game from earlier?
Which is what brings us around to go. AlphaGo was built by Google’s DeepMind, and like I said, it beat the greatest player alive at the greatest game ever made. Know how? Layers. It consists (massive oversimplification incoming!) of a set of components, each built to do one thing, all running simultaneously.
This one assesses how many moves have been made in a given area recently, providing data on which parts of the board are contested.
That one is a memory algorithm, going back through previous games and identifying similar situations. Another parses that memory for moves mathematically compatible with the current board. And so on.
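You can sketch the flavor of this in a few lines. To be clear, this looks nothing like AlphaGo’s actual internals; the evaluators, their names, and the weights below are all invented. The point is the shape: narrow components, each scoring one aspect of a position, composed into a single judgment.

```python
# Toy illustration of "layers": narrow evaluators combined by fixed
# weights. Every name and number here is made up for illustration.
from typing import Callable

def recent_activity(position: dict) -> float:
    """How contested is this area? Here: fraction of recent moves nearby."""
    return position.get("recent_moves_nearby", 0) / 10

def pattern_memory(position: dict) -> float:
    """Win rate of similar positions from past games (stubbed out)."""
    return position.get("similar_position_win_rate", 0.5)

EVALUATORS: list[tuple[Callable[[dict], float], float]] = [
    (recent_activity, 0.3),
    (pattern_memory, 0.7),
]

def score(position: dict) -> float:
    """Weighted sum of every evaluator's opinion of the position."""
    return sum(weight * f(position) for f, weight in EVALUATORS)

result = score({"recent_moves_nearby": 4, "similar_position_win_rate": 0.6})
print(round(result, 2))  # 0.3*0.4 + 0.7*0.6 = 0.54
```

Each piece is almost insultingly simple. The power is entirely in the composition, and in how fast data moves between the pieces.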
It better compute
It works. It’s brilliant. But it works because it’s incredibly fast and incredibly efficient at moving data between those processes. It can’t generate new ones. It’s still only an approximation of a human mind, a box of switches arranged in a brilliant order by brilliant people. Nuance is not an option.
As long as that’s the case, which is to say, until someone builds a computer that’s also a person, every implementation of AI will have the option of “does not compute.” Smart cars are in the immediate future, and in the short term “does not compute” is likely to mean jacked up premiums, jacked up prices, and, for the self-driving crowd, quite possibly jacked up rides. Drive carefully.