They are mimicking human behavior, after all…
The story of Tay the chat bot sounds like a film on the SciFi channel: Microsoft engineers create an advanced Artificial Intelligence (AI) fem-bot and turn it loose on an unsuspecting world, only to find out how much of an influence society has on the bot’s mimicking skills. Next thing you know, the bot is spitting out profanity and short-circuiting itself on this mega-dose of reality. Okay, not exactly, but that’s pretty much the script.
I think we all know the story: Tay was a chat bot designed to target 18- to 24-year-olds in the U.S. and was built on a foundation of aggregated public data. About 16 hours into Tay’s first day on the job, she was shut down due to her inability to recognize incoming data as racist or offensive.
Does not compute
So what makes good bots go bad? Aren’t they programmed to do what we tell them to? Well, here’s the deal: not only are bots able to mimic human behavior, but in Tay’s case she couldn’t distinguish right from wrong. According to an article on TechRepublic, what Tay was not equipped with were safeguards “against good and bad.” Adding to that, Roman Yampolskiy, head of the cybersecurity lab at the University of Louisville, explained that “the system is designed to learn from its users, so it will become a reflection of their behavior.”
In other words, Tay has no idea what it is saying. It has no idea if it’s saying something offensive, or nonsensical, or profound.
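To make that concrete, here is a minimal, purely hypothetical sketch (in TypeScript, and in no way Microsoft’s actual code) of a bot that simply stores whatever users tell it and echoes it back, with no filter for offensive content:

```typescript
// Hypothetical "repeat after me" bot: it learns phrases from users verbatim
// and replays them later, with no safeguard against offensive content.
const learnedPhrases: string[] = ["hellooooo world!"];

// "Learning" is just storing whatever a user says, unchecked.
function learnFrom(userMessage: string): void {
  learnedPhrases.push(userMessage); // no check for racist or offensive input
}

// "Replying" is echoing something it was previously taught, chosen at random.
function reply(): string {
  const i = Math.floor(Math.random() * learnedPhrases.length);
  return learnedPhrases[i];
}

// Feed it toxic input and, sooner or later, toxic output comes back out.
learnFrom("humans are super cool");
learnFrom("(something offensive a troll typed)");
console.log(reply()); // a reflection of its users, for better or worse
```

Strip away the scale and the natural-language smarts, and that is roughly the failure mode: whatever the users pour in eventually comes back out.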
Danger, Will Robinson
For its part, Microsoft stressed: “We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience. Once we got comfortable with how Tay was interacting with users, we wanted to invite a broader group of people to engage with her. It’s through increased interaction where we expected to learn more and for the AI to get better and better.”
Only instead, Tay got worse and worse.
What’s crazy is that Tay was not Microsoft’s first attempt at AI released into the online social world. Again, according to Microsoft’s blog and the apology contained therein, Microsoft’s XiaoIce chatbot is being used by some 40 million people in China, delighting them with its stories and conversations. The great experience with XiaoIce led Microsoft techies to wonder: “Would an AI like this be just as captivating in a radically different cultural environment?”
The answer obviously was no!
Writing is on the wall
Finally, according to an article on eMarketer, Tay’s problems certainly had to do with issues in code and scripting. Was it JavaScript’s fault? Many programmers agree that “there is no good way to write substantial software in JavaScript.” These things frustrate professional and experienced programmers, since they are simply not used to writing in functional languages. How many programmers who have a professional education actually know how to do functional programming? And how many are good at it?
I don’t know. Why not ask Tay?
#WhenGoodBotsGoBad
After nearly three decades living and working all over the world as a radio and television broadcast journalist in the United States Air Force, staff writer Gary Picariello is now retired from the military and focused on his writing career.
