
Robin_Hanson2 comments on Devil's Offers

21 points · Post author: Eliezer_Yudkowsky, 25 December 2008 05:00PM


Comment author: Robin_Hanson2, 25 December 2008 04:25:12PM, 3 points

What is the point of trying to figure out what your friendly AI would choose in each standard difficult moral dilemma, if in each case the answer will be "how dare you disagree with it, since it is so much smarter and more moral than you?" If the point is that your design of this AI will depend on how well various proposed designs agree with your moral intuitions in specific cases, then the rest of us have great cause for concern about how much to trust your specific intuitions.

James is right: you only need one moment of "weakness" to approve a protection against all future moments of weakness, so it is not clear there is an asymmetric problem here.