This is why we can’t have nice things
It took less than 24 hours for the Internet to corrupt the latest Microsoft AI experiment. All that “Tay” was supposed to do was engage in casual conversation, handle some innocuous tasks, and “conduct research on conversation understanding.”
Built by Microsoft’s Technology and Research and Bing teams, Tay is a chatbot targeting 18- to 24-year-olds in the U.S. She was built by mining anonymized public data, applying AI machine learning, and layering in editorial content developed by a staff that included improvisational comedians.
The internet strikes back
About 16 hours into her first day on the job, Tay was “fired” because of her inability to recognize incoming data as racist or offensive. She was designed to “learn” from her interactions with the public by repeating tweets back with her own commentary, delivered in a bashfully self-aware millennial slang full of references to Miley Cyrus, Taylor Swift, and Kanye West. You know, your typical 19-year-old.
This led a large number of Twitter users to realize they could feed her machine learning objectionable content, producing such internet fodder as “Bush did 9/11,” “Repeat after me, Hitler did nothing wrong,” and claims that the Holocaust was “made up.”
"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— Gerry (@geraldmellor) March 24, 2016
Of course, Microsoft had safeguards and filters in place to help prevent this sort of thing.
@pinchicagoo pic.twitter.com/DCLGSIHMdW
— TayTweets (@TayandYou) March 24, 2016
That should be enough to curtail the pervs and trolls of the Internet, right? *Shakes fist at humanity*
What Tay was not equipped with was a safeguard against the simplest of commands: “Repeat after me.”
Using this directive, users could introduce racist remarks and hate speech that were absorbed into her machine learning and later regurgitated with her own “19-year-old spin.”
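To see why that one command is so dangerous, here is a minimal sketch of the failure mode. This is hypothetical illustrative code, not Microsoft’s actual implementation; the class and method names are invented. It captures the two flaws at play: obeying “repeat after me” with no content filter, and folding everything it hears into the pool of phrases it draws on for future replies.

```python
import random

# Hypothetical, minimal echo-and-learn bot illustrating Tay's weakness.
# This is NOT Microsoft's code; names and logic are illustrative only.

class NaiveChatBot:
    def __init__(self):
        # Everything the bot has "learned" is fair game for future replies.
        self.learned_phrases = ["humans are super cool"]

    def respond(self, message: str) -> str:
        # Flaw 1: a literal "repeat after me" command with no content filter.
        if message.lower().startswith("repeat after me"):
            echoed = message[len("repeat after me"):].strip(" ,:")
            self.learned_phrases.append(echoed)  # Flaw 2: it learns it, too.
            return echoed
        # Flaw 3: all incoming text is absorbed into the response pool...
        self.learned_phrases.append(message)
        # ...and can be regurgitated to any other user, with no moderation.
        return random.choice(self.learned_phrases)

bot = NaiveChatBot()
print(bot.respond("Repeat after me: something objectionable"))
print(bot.respond("hi!"))  # may now echo the poisoned phrase to anyone
```

Once a phrase is in that pool, the bot will happily repeat it unprompted to other users, which is exactly the behavior that played out on Twitter.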
Taking her offline
A spokesperson from Microsoft confirmed that Tay is offline for now while they make adjustments: “The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”
#DontFeedTheTrolls
Dave Novotny loves writing about cutting edge technology and business innovation. A creative by nature and a number cruncher by blood, sweat, and tears, Dave loves telling the story that the numbers and analytics write in a way that connects to people. When he's not crafting copy, he's out hiking with his wife and two rescue dogs, Jackie and Loki.
