Roko comments on Open Thread: February 2010, part 2 - Less Wrong

10 Post author: CronoDAS 16 February 2010 08:29AM


Comment author: Vladimir_Nesov 21 February 2010 02:17:10PM *  1 point [-]

On one hand, using preference-aggregation is supposed to give you an outcome you prefer less than the one you'd get by just starting from yourself. On the other hand, CEV is not "morally neutral". (Or at least, the extent to which weight is given to preferences in CEV implicitly has nothing to do with preference-aggregation.)

We have a tradeoff between the number of people included in preference-aggregation and the value-to-you of the outcome. So this is a situation in which to apply the reversal test. If you consider including only the smart, sane Westerners preferable to including all presently alive folks, then you need a good argument for why you wouldn't also want to exclude some of the smart, sane Westerners, up to the point of leaving only yourself.

Comment deleted 21 February 2010 04:47:26PM [-]
Comment author: Unknowns 24 February 2010 04:59:48AM 2 points [-]

I hope you realize that you are in flat disagreement with Eliezer about this. He explicitly affirmed that running CEV on himself alone, if he had the chance to do it, would be wrong.

Comment author: wedrifid 24 February 2010 06:29:35AM *  1 point [-]

Eliezer quite possibly does believe that. That he can make that claim with some credibility is one of the reasons I am less inclined to use my resources to thwart Eliezer's plans for future light cone domination.

Nevertheless, Roko is right more or less by definition, and I lend my own flat disagreement to his.

Comment author: Eliezer_Yudkowsky 24 February 2010 05:41:09AM 1 point [-]

Confirmed.

Comment author: Vladimir_Nesov 21 February 2010 05:15:56PM *  1 point [-]

"Low probability of success" should of course include game-theoretic considerations where people are more willing to help you if you give more weight to their preference (and should refuse to help you if you give them too little, even if it's much more than status quo, as in Ultimatum game). As a rule, in Ultimatum game you should give away more if you lose from giving it away less. When you lose value to other people in exchange to their help, having compatible preferences doesn't necessarily significantly alleviate this loss.

Comment deleted 21 February 2010 05:28:05PM [-]
Comment author: Vladimir_Nesov 21 February 2010 06:56:49PM *  1 point [-]

"I know about the ultimatum game, but it is game-theoretically interesting precisely because the players have different preferences: I want all the money for me, you want all of it for you."

The Ultimatum game was mentioned primarily as a reminder that the amount of FAI-value traded for assistance may be orders of magnitude greater than what the assistance itself feels like it amounts to.

We might as well take as given that all the values under discussion are (at least to some small extent) different. The "all of the money" here is the set of points of disagreement, the mutually exclusive features of the future. But you are not trading value for value. You are trading value-after-FAI for assistance-now.

If two people compete to provide you an equivalent amount of assistance, you should be indifferent between accepting it from either of them, which means that it should cost you an equivalent amount of value in each case. If Person A has preferences close to yours, and Person B has preferences distant from yours, then by giving up the same amount of value you can help Person A more than Person B. Thus, if we assume egalitarian "background assistance", provided implicitly by e.g. not revolting against and stopping the FAI programmer, then everyone can still get a slice of the pie, no matter how distant their values. If nothing else, the more alien people should strive to help you more, so that you'll be willing to part with more value for them (the marginal value of providing assistance is greater for distant-preference folks).
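A toy model makes that asymmetry concrete, under the simplifying assumption (not from the comment) that the overlapping part of two people's preferences costs nothing extra to satisfy, so delivering a given benefit to someone whose preferences overlap yours by a fraction `overlap` costs you benefit * (1 - overlap) of your own value:

    # Toy model: cost to me of delivering `benefit` to someone whose
    # preferences overlap mine by `overlap` (0 = totally alien, 1 = identical).
    # The linear form and the overlap numbers are illustrative assumptions.
    def cost_to_me(benefit, overlap):
        return benefit * (1.0 - overlap)

    def benefit_per_unit_of_my_value(overlap):
        return 1.0 / (1.0 - overlap)

    person_a = 0.9   # hypothetical: preferences close to mine
    person_b = 0.1   # hypothetical: preferences distant from mine

    budget = 1.0     # the same amount of my value spent on each person
    print(budget * benefit_per_unit_of_my_value(person_a))   # 10.0
    print(budget * benefit_per_unit_of_my_value(person_b))   # ~1.1

For the same loss of value on my side, Person A here receives roughly nine times the benefit Person B does; so to receive an equal slice of post-FAI value, Person B has to bring correspondingly more assistance to the table.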

Comment deleted 21 February 2010 08:21:51PM *  [-]
Comment author: Vladimir_Nesov 21 February 2010 09:06:36PM *  2 points [-]

You don't include cultures in CEV; you filter people through extrapolation of their volition. Even if culture makes values differ, "mutilating women" is not the kind of thing that gets through, and so it is a broken prototype example to draw attention to.

In any case, my argument in the above comment was that value should be given (theoretically, if everyone understands the deal and the relevant game theory, etc., etc.; realistically, such a deal must be simplified, and you may even get away with cheating) according to the assistance provided, not according to compatibility of values. If poor compatibility of values prevents someone from giving assistance, that is an effect of value completely unrelated to post-FAI compatibility, and given that assistance can be given with money, the effect itself doesn't seem real either. You may well exclude the people of Myanmar, because they are poor and can't affect your success, but not the people of a generous/demanding genocidal cult, for the irrelevant reason that they are evil. Game theory is cynical.

Comment deleted 21 February 2010 11:10:55PM [-]
Comment author: Vladimir_Nesov 21 February 2010 11:51:06PM 1 point [-]

"How do you know? If enough people want it strongly enough, it might."

How strongly people want something now doesn't matter; reflection has the power to wipe the current consensus clean. You are not cooking a mixture of wants; you are letting them fight it out, and a losing want doesn't have to leave any residue. Only to the extent that current wants might indicate extrapolated wants should we take current wants into account.

Comment author: Kevin 21 February 2010 11:28:38PM 0 points [-]

Might "mutilating men" make it through?

(Sorry for the euphemism; I mean male circumcision.)