It seems like something that could be easily anticipated, and even tested for.
Do anticipate what happened to the bot it would be necessary to predict how people interact with him. How the 4chan crowd interacted with it. That seems hard to test beforehand.
That seems hard to test beforehand.
They could have done an internal beta and said "fuck with us". They could have allocated time to a dedicated internal team to do so. Don't they have internal hacking teams to similarly test their security?
http://www.wired.com/2016/03/fault-microsofts-teen-ai-turned-jerk/
Could this be a lesson for future AIs? The AI control problem?
[future AIs may be shutdown, and matyred..]