Unsurprisingly, Microsoft's AI Bot Tay Was Tricked Into Being Racist - Newsy
Transcript:
Surprise, surprise — just a day after Microsoft's new artificial intelligence, Tay, launched on several social platforms, it was corrupted by the Internet.
If you haven't heard of Tay, it's a machine learning project created by Microsoft that's supposed to mimic the personality of a 19-year-old girl. It's essentially an instant messaging chat bot with a bit more smarts built in.
Those smarts give Tay the ability to learn from the conversations she has with people; that's where the corruption comes into play.
As surprising as it may sound, the company didn't have the foresight to keep Tay from learning inappropriate responses.
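To illustrate what such a fail-safe might look like, here is a minimal sketch of a filter that screens messages before a bot learns from them. The blocklist, function names, and learning loop are hypothetical illustrations, not Microsoft's actual implementation.

# A hypothetical sketch of the kind of fail-safe Tay lacked: screen each
# message against a moderation blocklist before it enters the bot's
# learning corpus. Names and blocklist contents are illustrative only.

BLOCKED_TERMS = {"badword1", "badword2"}  # in practice, a maintained moderation list

learned_corpus: list[str] = []  # messages the bot is allowed to learn from

def is_safe(message: str) -> bool:
    """Return True if the message contains no blocked terms."""
    return set(message.lower().split()).isdisjoint(BLOCKED_TERMS)

def learn_from(message: str) -> None:
    """Add a message to the training corpus only if it passes the filter."""
    if is_safe(message):
        learned_corpus.append(message)

learn_from("hello there")     # accepted
learn_from("badword1 hello")  # silently rejected
print(learned_corpus)         # ['hello there']

A simple word blocklist like this would not have caught everything, but it shows the general idea: filter input before it becomes training data, rather than after the bot has already repeated it.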
Tay ended up sending out racial slurs, denying the Holocaust, expressing support for genocide and posting many other controversial statements.
Microsoft eventually deactivated Tay. The company told TechCrunch that once it discovered a "coordinated effort" to make the AI project say inappropriate things, it took the program offline to make adjustments.
Seasoned Internet users among us were none too surprised by the unfortunate turn of events. If you don't program in fail-safes, the Internet is going to do its worst — and it did.
In fact, The Guardian cited Godwin's Law, which holds that the longer an online discussion goes on, the more likely it is that someone will compare something to Hitler or the Nazis.
As a writer for TechCrunch put it, "While technology is neither good nor evil, engineers have a responsibility to make sure it's not designed in a way that will reflect back the worst of humanity. ... You can't skip the part about teaching a bot what 'not' to say."

Sources:
Microsoft
https://tay.ai/
BuzzFeed
http://www.buzzfeed.com/alexkantrowitz/microsoft-introduces-tay-an-ai-powered-chatbot-it-hopes-will#.lbLKpGJg
Business Insider
http://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3?r=UK&IR=T
TechCrunch
http://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/
The Guardian
http://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter