I wonder.
It seems like something that could be easily anticipated, and even tested for.
Yet a lot of people just don't take a game theoretic look at problems, and have a hard time conceiving of people with different motivations than they have.
To anticipate what happened to the bot, it would be necessary to predict how people would interact with it, and in particular how the 4chan crowd would. That seems hard to test beforehand.
http://www.wired.com/2016/03/fault-microsofts-teen-ai-turned-jerk/
Could this be a lesson for future AIs? The AI control problem?
[future AIs may be shut down, and martyred..]