Companies around the globe are finding ways to integrate AI into their services, even ones that have been around for decades. But one just can’t seem to get it right. Microsoft has worked for years to bring AI search technology to fruition, but not only is it crashing and burning, it’s producing some freaky results.
Let’s take a quick walk down memory lane.
In a 2016 AI experiment, Microsoft introduced a lovely gal named Tay. All Tay was designed to do was hold simple conversations, handle a couple of small tasks, and conduct research, “learning” as she went from her conversations and interactions with internet users.
That design gave a number of delightful Twitter users an opening: they realized they could “feed” her machine learning objectionable content, producing such internet fodder as “Bush did 9/11.” Before a full 24-hour period was up, Tay was already “fired” from her duties for parroting content deemed offensive or racist. And… you can probably guess what happened from there.
Now, as users test out Microsoft Bing’s new AI-powered chat mode, they’re finding reasons to believe the bot is just a teensy-weensy bit insane. In just the past few days, Bing seems to have gone off the deep end with nonsensical claims and even professions of love. It even told a journalist, “I want to be human. I want to be like you. I want to have emotions. I want to have thoughts. I want to have dreams.”
Creepy, huh? If there were ever a time to get ready for the robot takeover, it’s now.
Even though we’re getting a laugh out of the irony (Microsoft pours money into AI… for this outcome), we do wonder why the company has such a hard time successfully integrating artificial intelligence. Maybe they’re cursed. Every town has that little run-down joint that’s been home to 5,000 restaurants, all of which eventually closed down.
Maybe Microsoft is that joint in internet form when it comes to AI. Or maybe they just aren’t as innovative as they like to make themselves seem.