simon2 comments on Three Worlds Decide (5/8) - Less Wrong

Post author: Eliezer_Yudkowsky 03 February 2009 09:14AM


Comment author: simon2 03 February 2009 06:40:00PM 0 points

... but relative to simply cooperating, it seems a clear win, unless the Superhappies have thought of it and planned a response.

Of course, the corollary for the real world would seem to be: those who think that most people would not converge if "extrapolated" by Eliezer's CEV ought to exterminate the people with whom they disagree on moral questions before the AI is strong enough to stop them, unless Eliezer has programmed the AI to punish that sort of thing.

Hmm. That doesn't seem so intuitively nice. I wonder whether it's just a quantitative difference between the scenarios (e.g. the degree of moral divergence) or a qualitative one (e.g. the Babyeaters are bad enough to justifiably be killed in the first place).