Surprise, surprise: just today, after Microsoft's new artificial intelligence, Tay, launched on several social platforms, it was corrupted by the internet.
If you haven't heard of Tay, it's a machine learning project created by Microsoft that's supposed to mimic the personality of a 19-year-old girl. It's essentially an instant messaging chatbot with a bit more smarts built in.
Those smarts give Tay the ability to learn from the conversations she has with people, and that's where the corruption comes into play.
As surprising as it may sound, the company didn't have the foresight to keep Tay from learning inappropriate responses. Tay ended up sending out racial slurs, denying the Holocaust, expressing support for genocide and posting many other controversial statements.
Microsoft eventually deactivated Tay. The company told TechCrunch that once it discovered a coordinated effort to make the AI project say inappropriate things, it took the program offline to make adjustments.
Seasoned internet users among us are none too surprised by the unfortunate turn of events. If you don't program in fail-safes, the internet is going to do its worst, and it did.
In fact, The Guardian cites Godwin's Law, which holds that the longer an online discussion goes on, the more likely it is that someone will compare something to Hitler or the Nazis.
As a writer for TechCrunch put it, while technology is neither good nor evil, engineers have a responsibility to make sure it's not designed in a way that will reflect back the worst of humanity. You can't skip the part about teaching a bot what not to say.
For Newsy, I'm Mikah Sargent.

[upbeat music]