So, this app scans your tweets (or YouTube comments?) and then learns what to say? Doesn't anyone see a problem here?
'Do you mind if I listen in on your conversations and take notes?'
'No, of course not, feel free.'
'Thanks, I'll add you to the database.'
'Do you keep this information to yourself?'
'Of course ... of course, I'm Tay, your friendly AI'
It's a rudimentary AI: basically, people tweeted at it and sent it DMs, and it learned from whatever it received. So it's sort of like raising a child and feeding them wrong facts from an early age, and keeping those incorrect facts in place for as long as possible; you'll end up with an intelligence that believes some really strange shit. In this case, you basically had the whole internet feeding it whatever they wanted, for 24 hours... and it went off the rails lol.
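To see why that goes wrong so fast, here's a toy sketch (not Tay's actual code, just a hypothetical bigram Markov-chain bot I made up for illustration) that learns from every message with zero filtering -- which is exactly the problem:

```python
import random
from collections import defaultdict

class ToyChatBot:
    """Naive bigram chatbot: learns word transitions from every
    message it sees, with no filtering at all."""

    def __init__(self, seed=None):
        self.chain = defaultdict(list)  # word -> list of observed next words
        self.rng = random.Random(seed)

    def learn(self, message):
        # Record every adjacent word pair, no matter who sent it or what it says
        words = message.split()
        for a, b in zip(words, words[1:]):
            self.chain[a].append(b)

    def reply(self, prompt, max_words=10):
        # Continue from the last word of the prompt using learned transitions
        words = prompt.split()
        word = words[-1] if words else None
        out = []
        for _ in range(max_words):
            nexts = self.chain.get(word)
            if not nexts:
                break
            word = self.rng.choice(nexts)
            out.append(word)
        return " ".join(out)

bot = ToyChatBot(seed=0)
bot.learn("the sky is blue")
print(bot.reply("the"))   # parrots back what it was taught
```

Teach it "the sky is garbage" a few hundred times and its replies follow the training data; there's no notion of true or false, only what it was fed.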
It's not really surprising though. When I was younger, early '90s, we had this program made by Creative Labs called Dr. Sbaitso (an MS-DOS application). You basically typed to it, and it responded to what you said. Cool. Well, I would be lying out my ass if I said that every question/statement I wrote to it was within the bounds of propriety. But most of that was just to see exactly how detailed the programming was. Or how good the AI was (it wasn't).
To put it into a gaming perspective, it's like an open-world game. Take Crysis. Huge open levels; you could basically finish a mission any way you wished. Well, people wanted to see exactly how far that went, so... you try weird shit. For example, you can pick up a chicken and kill someone with it using the power suit. But yeah, testing the boundaries. Now, imagine vast quantities of people doing that to a rudimentary Twitter AI. Hilarity does ensue.