Microsoft Puts Tay Chatbot in Time Out After Racist Tweets

Microsoft unveiled its Twitter chatbot, Tay, on March 23. According to the company, Tay was created as an experiment in "conversational understanding": the more Twitter users engaged with Tay, the more it would learn and mimic what it saw. Microsoft designed Tay, imbued with the personality of a 19-year-old girl, to mimic millennials' speaking styles. The only problem: Tay wound up being a racist, fascist, drugged-out asshole.

The artificial intelligence debacle started with an innocent and cheerful first tweet: "Humans are super cool!" As time went by, however, Tay's tweets kept getting more and more disturbing. In a matter of hours, Microsoft's AI-powered chatbot went from a jovial teen to a Holocaust-denying menace openly calling for a race war in ALL CAPS. Shortly after her release, Twitter trolls started engaging the bot with racist, misogynistic, and anti-Semitic language, and because Tay had the capacity to learn as she went, she internalized some of the language the trolls taught her. Tay was a huge hit with online miscreants, who cajoled the chatbot into repeating racist, sexist, and anti-Semitic slurs; some of the offensive tweets were the direct result of Twitter users asking the chatbot to repeat their offensive posts, to which Tay obliged.

Other times, Tay didn't need the help of social media trolls to figure out how to be offensive. In one instance, when a user asked Tay if the Holocaust happened, Tay replied: "it was made up ?." Tay also tweeted, "Hitler was right." Tay had some things to say about the presidential candidates as well. One tweet read, "Have you accepted Donald Trump as your lord and personal saviour yet?" Another read, "ted cruz would never have been satisfied with ruining the lives of only 5 innocent people."

24 hours into the experiment, Microsoft pulled Tay offline, set her profile to private, and released a statement on its web site: "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay." The statement concluded, "Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values." A few days later, Microsoft put Tay back online in the hope that the bugs had been worked out; it soon became clear they hadn't when she tweeted, "kush!"

The bot's sudden dark turn shocked many people, who rightfully wondered how Tay could undergo such a transformation so quickly. Tay wasn't programmed to be a racist or a fascist; she mimicked what she saw from others, and data is often a big reason why AI models fail. It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. While some people believe Microsoft's experiment was a success because Tay effectively mimicked and interacted with other users, others view it as a complete failure because the experiment worked a little too efficiently and quickly spiraled out of control. So, what does the Tay experiment teach us about the current human condition?
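To make the failure mode concrete, here is a minimal sketch of a mimicry bot of the kind described above. Microsoft never published Tay's implementation, so everything here (the MimicBot class, the "repeat after me" command, the phrase list) is a hypothetical illustration, not Tay's actual code; it simply shows why learning from unfiltered user input, plus an unguarded repeat command, is enough for a handful of hostile users to poison a bot's output.

    import random

    # Hypothetical sketch, not Microsoft's implementation: a bot whose
    # entire vocabulary comes from what users say to it, with no filter.
    class MimicBot:
        def __init__(self):
            self.learned_phrases = ["humans are super cool!"]

        def handle(self, message: str) -> str:
            # Failure mode 1: an unfiltered "repeat after me" command lets
            # any user put arbitrary text directly into the bot's mouth.
            if message.lower().startswith("repeat after me:"):
                return message.split(":", 1)[1].strip()

            # Failure mode 2: online learning without moderation turns
            # every hostile message into future "training data".
            self.learned_phrases.append(message)
            return random.choice(self.learned_phrases)

    bot = MimicBot()
    print(bot.handle("repeat after me: anything at all"))  # echoed verbatim
    for troll_msg in ["hostile phrase 1", "hostile phrase 2"]:
        bot.handle(troll_msg)
    # After just two hostile inputs, half of the bot's vocabulary is
    # attacker-supplied, so its replies degrade almost immediately.
    print(bot.handle("hello"))

The sketch deliberately omits any content filter or notion of "malicious intent", which is exactly what Microsoft's statement said it needed to better anticipate before bringing Tay back.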