
Randy_M comments on [LINK] Why I'm not on the Rationalist Masterlist - Less Wrong Discussion

21 Post author: Apprentice 06 January 2014 12:16AM




Comment author: Randy_M 08 January 2014 02:26:04PM 0 points

Just imagine that you had certain proof (by observing parallel universes, or by simulations done by a superhuman AI) that, e.g., tolerance of homosexuality inevitably leads to the destruction of civilization, or that every civilization that invents nanotechnology inevitably destroys itself in nanotechnological wars unless the whole planet is united under the rule of the communist party. If you had good reason to believe these models, what would your values make you do?

Perfect-information scenarios are useful for clarifying some cases, I suppose (and let's go with the non-humanity-destroying option every time), but I don't find that they map closely onto actual situations.

I'm not sure I can aptly articulate my intuition here. As for differences in values, I don't think people would differ much in their terminal values if each made a list of everything they would want in a perfect world (barring outliers). But the relative weights people place on those values, while differing only slightly, may end up suggesting quite different policy proposals, especially in a world of imperfect information, even if each person is committed to using reason.

But I'll concede that some ideologies are much more comfortable with utilitarian analysis than with rigid imperatives, which are more likely to yield consistent results.