
harshhpareek comments on What can we learn from Microsoft's Tay, its inflammatory tweets, and its shutdown? - Less Wrong

1 Post author: InquilineKea 26 March 2016 03:41AM




Comment author: harshhpareek, 27 March 2016 02:55:25AM, 2 points

There is an opinion expressed here that I agree with: http://smerity.com/articles/2016/tayandyou.html TL;DR: no "learning" from interactions on Twitter happened. The bot was parroting old training data, because it does not really generate text, and the researchers did not apply an offensiveness filter at all.
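Tay's actual pipeline is not public, so purely as an illustration of what "an offensiveness filter" could mean at its simplest, here is a minimal keyword-blocklist sketch. All names and terms below are placeholders I made up, not anything from Microsoft's system:

```python
# Placeholder blocklist; a real deployment would need a much richer
# approach (phrase matching, obfuscation handling, a trained classifier).
OFFENSIVE_TERMS = {"slur1", "slur2"}

def is_offensive(reply):
    """Flag a candidate reply if any token matches the blocklist."""
    tokens = reply.lower().split()
    return any(t in OFFENSIVE_TERMS for t in tokens)

def safe_reply(reply, fallback="I'd rather not comment on that."):
    """Suppress a flagged reply and substitute a canned fallback."""
    return fallback if is_offensive(reply) else reply

print(safe_reply("that is just slur1 talk"))
print(safe_reply("a perfectly harmless message"))
```

Even a crude filter like this would have blocked the most obvious failures; the point of the linked article is that apparently nothing of the sort was in place.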

I think this chatbot was performing badly right from the start. It would not make sense to give too much weight to the users it was chatting with, and they did not change its mind; that bit of media sensationalism is BS. Natural language generation is an open problem, and almost every method I have seen (I am not an expert in NLP, but would call myself one in machine learning) ends up parroting some of its training text, which implies it is overfitting.

Given this, we should learn nothing about AI from this experiment, only about people's reactions to it, mainly the media's reaction. Users' reactions while talking to AI are already well documented.