Tech is on the up and up
It feels almost redundant to say we live in an age of unprecedented technological growth. Just look at this stuff: a computer that crafted an actual organism, a cell phone that doubles as a microscope, a robot that can run and is already in military use, a helium-filled 6TB hard drive, open-source industrial machines, and a syringe of tiny sponges that can seal a gunshot wound in seconds.
But there are concerns
I don’t blame the worriers. Neither does Dr. Guru Banavar, Chief Science Officer for Cognitive Computing at IBM, who recently wrote on the subject. “The most urgent work is to recognize and minimize bias. Bias could be introduced into an AI system through the training data or the algorithms,” he notes.
As head of cognitive computing at IBM, he’s kind of got a dog in this fight. Some other really smart people – Bill Gates, Stephen Hawking and Elon Musk, for example, three dudes to whom it often pays to listen – are saying the opposite: that AI itself may be the threat.
OK, I don’t hear Microsoft office drones or Musk-branded actual drones coming to get me. But maybe they just blue-screened and need a reboot before they march down my street like Cybermen and arrest me for heresy, so let’s get serious.
Coal, steel, and concrete
Ironically, the answer to the question “Will Skynet kill us all?” lies not in the eternal, shiny and chrome future, but in history. When I said tech has been exponentially improving for generations, it wasn’t hyperbole. It was math. Human life has changed more in the last 300 years than in the ten thousand beforehand, when we figured out that putting seeds in the ground makes them do stuff. The Industrial Revolution never really ended. The combination of Newton’s rigor and Watt’s engineering that remade the mostly agrarian world with coal and steel and concrete is still remaking the world, still mostly with coal and steel and concrete.
Worse before it gets better
AI is going to be another big change. Industrial Revolution big? Dammit, Jim, I’m a writer, not an oracle. But I’m putting my money behind Dr. Banavar rather than the Three Wise Geeks, because this time we have an unprecedented advantage: 300 years of our ancestors screwing up. London had two million people in it before it had sewers. That led directly, and unsurprisingly, to cholera epidemics; those epidemics led to the germ theory of disease, which in turn led to not dying of tooth decay. I am in favor of not dying of tooth decay.
AI is only a thing because the great pre-AI paradigm shift was an immeasurably vast increase in the availability of data. That means that this time, we have a chance of seeing the consequences coming. Thanks both to pro-AI scholars like Dr. Banavar and AI skeptics like Dr. Hawking, the implementation of AI could be something new: a conscious revolution.
After all, we were afraid of this change decades before it came.
Alan Turing, founding father of computer science, took on the philosophical tangles of AI all of two years after the first stored-program computer was created, and Hubert Dreyfus – not to mention HAL and Superman – addressed the fears and failures of artificial intelligence back when fitting a computer into a single room was still a tall ask.
Or will we triumph?
When Thomas Newcomen set his piston bouncing, he had no idea he’d started the Industrial Revolution. He was just trying to dry out a mine. There wasn’t an angel on his shoulder whispering “Hey, before you turn on your engine, have you considered it might cause a cholera outbreak in London and the subsequent founding of epidemiology?”
We’ve got the angel, in the form of a wealth of opinions on what machine learning should and should not do. We’ve traced the lines of dominoes back from the triumphs and tragedies of world history. AI represents a chance at Revolution Mark 2, change guided from “go” by human interests.
Though who knows? Maybe I’m with the Cybermen.