Konkvistador comments on Is friendly AI "trivial" if the AI cannot rewire human values? - Less Wrong

-5 Post author: Alerus 09 May 2012 05:48PM




Comment author: [deleted] 10 May 2012 05:50:53PM *  1 point [-]

For the Poles at least, I fear it probable that not many would be around, say, 20 years after victory.

Comment author: Alerus 10 May 2012 08:06:55PM 0 points [-]

And as for the others? Or are you saying the AI trying to maximize well-being would attempt, and succeed at, effectively wiping out everyone, then conditioning future generations to have easily maximized values? If so, that behavior depends on the AI being very confident in its ability to pull it off: otherwise the chance of failure, combined with the cost of war, would massively drop the expected value of human well-being. I think you should also spell out what you think these values might end up being — the ones it would try to change human values toward in order to maximize them more easily.