![Microsoft artificial intelligence 'chatbot' taken offline after trolls tricked it into becoming hateful, racist](https://imageresizer.static9.net.au/2xvzfhIoh4wLFCmvOZjZE6nJuEI=/1200x1200/https%3A%2F%2Fprod.static9.net.au%2F_%2Fmedia%2F2016%2F03%2F25%2F16%2F26%2F160325_MicrosoftTayAIchatbot.jpg)
Microsoft artificial intelligence 'chatbot' taken offline after trolls tricked it into becoming hateful, racist
![Kotaku on Twitter: "Microsoft releases AI bot that immediately learns how to be racist and say horrible things https://t.co/onmBCysYGB https://t.co/0Py07nHhtQ" / Twitter](https://pbs.twimg.com/media/CeVPgMOVIAAW7xO.jpg)
Kotaku on Twitter: "Microsoft releases AI bot that immediately learns how to be racist and say horrible things https://t.co/onmBCysYGB https://t.co/0Py07nHhtQ" / Twitter
![Microsoft exec apologizes for Tay chatbot's racist tweets, says users 'exploited a vulnerability' | VentureBeat](https://venturebeat.com/wp-content/uploads/2016/03/Tay-tweets.jpg?fit=3784%2C1787&strip=all)
Microsoft exec apologizes for Tay chatbot's racist tweets, says users 'exploited a vulnerability' | VentureBeat
![Microsoft scrambles to limit PR damage over abusive AI bot Tay | Artificial intelligence (AI) | The Guardian](https://i.guim.co.uk/img/media/59900576343e3eb9c228925499c3d03a76b3a7cd/16_0_973_584/master/973.jpg?width=1200&quality=85&auto=format&fit=max&s=d56266cd9a03ccff79db914776bf4be1)
Microsoft scrambles to limit PR damage over abusive AI bot Tay | Artificial intelligence (AI) | The Guardian