Nick_Tarleton comments on Open Thread: February 2010, part 2 - Less Wrong

10 Post author: CronoDAS 16 February 2010 08:29AM




Comment author: Nick_Tarleton 18 February 2010 12:01:02AM *  5 points [-]

We don't favor those values because they are the values of that subset — which is what "doing things to favor white people's values" would mean — but because we think they're right. (No License To Be Human, on a smaller scale.) This is a huge difference.

Comment author: wedrifid 18 February 2010 05:07:34AM *  3 points [-]

which is what "doing things to favor [group who shares my values] values" would mean — but because we think they're right.

Given the way I use 'right' this is very nearly tautological. Doing things that favour my values is right by (parallel) definition.

Comment deleted 18 February 2010 12:09:46AM [-]
Comment author: Clippy 18 February 2010 12:26:27AM 6 points [-]

Well, you shouldn't.

Comment author: Vladimir_Nesov 21 February 2010 10:35:38AM *  2 points [-]

Sure, we favor the particular Should Function that is, today, instantiated in the brains of roughly middle-of-the-range-politically intelligent westerners.

Do you think there is no simple procedure that would find roughly the same "should function" hidden somewhere in the brain of a brain-washed, blood-thirsty religious zealot? It doesn't need to be what the person believes or what the person would recognize as valuable; it just needs to be something extractable from the person, according to a criterion that might be very alien to their conscious mind. Not all opinions (beliefs/likes) are equal, and I wouldn't want to get stuck with the wrong optimization criterion just because I happened to be born in the wrong place and didn't (yet!) get the chance to learn more about the world.

(I'm avoiding the term 'preference' to remove connotations I expect it to have for you, for what I consider the wrong reasons.)

Comment deleted 21 February 2010 01:20:09PM *  [-]
Comment author: CarlShulman 22 February 2010 10:14:14AM 1 point [-]

Haidt claims only that the relative balance of those five clusters differs across cultures; they're present in all of them.

Comment author: Vladimir_Nesov 21 February 2010 02:17:10PM *  1 point [-]

On one hand, using preference-aggregation is supposed to give you an outcome you prefer less than if you had just started from yourself. On the other hand, CEV is not "morally neutral". (Or at least, the weight a given preference gets in CEV implicitly has nothing to do with preference-aggregation.)

We have a tradeoff between the number of people included in the preference-aggregation and the value-to-you of the outcome. So this is a situation in which to apply the reversal test: if you consider including only the smart, sane westerners preferable to including everyone presently alive, then you need a good argument for why you wouldn't also want to exclude some of the smart, sane westerners, up to the point of leaving only yourself.

Comment deleted 21 February 2010 04:47:26PM [-]
Comment author: Unknowns 24 February 2010 04:59:48AM 2 points [-]

I hope you realize that you are in flat disagreement with Eliezer about this. He explicitly affirmed that running CEV on himself alone, if he had the chance to do it, would be wrong.

Comment author: wedrifid 24 February 2010 06:29:35AM *  1 point [-]

Eliezer quite possibly does believe that. That he can make that claim with some credibility is one of the reasons I am less inclined to use my resources to thwart Eliezer's plans for future light cone domination.

Nevertheless, Roko is right more or less by definition and I lend my own flat disagreement to his.

Comment author: Eliezer_Yudkowsky 24 February 2010 05:41:09AM 1 point [-]

Confirmed.

Comment author: Vladimir_Nesov 21 February 2010 05:15:56PM *  1 point [-]

"Low probability of success" should of course include game-theoretic considerations: people are more willing to help you if you give more weight to their preferences (and should refuse to help you if you give them too little, even if it's much more than the status quo, as in the Ultimatum game). As a rule, in the Ultimatum game you should give away more if giving away less would cost you the deal. When you lose value to other people in exchange for their help, having compatible preferences doesn't necessarily significantly alleviate this loss.

Comment deleted 21 February 2010 05:28:05PM [-]
Comment author: Vladimir_Nesov 21 February 2010 06:56:49PM *  1 point [-]

I know about the ultimatum game, but it is game-theoretically interesting precisely because the players have different preferences: I want all the money for me, you want all of it for you.

The Ultimatum game was mentioned primarily as a reminder that the amount of FAI-value traded for assistance may be orders of magnitude greater than what the assistance seems to amount to.

We might as well take it as given that all the values under discussion are (at least to some small extent) different. The "all of the money" here is the set of points of disagreement, the mutually exclusive features of the future. But you are not trading value for value. You are trading value-after-FAI for assistance-now.

If two people compete to provide you an equivalent amount of assistance, you should be indifferent between them in accepting it, which means it should cost you an equivalent amount of value. If Person A has preferences close to yours, and Person B has preferences distant from yours, then by losing the same amount of value, you can help Person A more than Person B. Thus, if we assume egalitarian "background assistance", provided implicitly by, e.g., not revolting against or stopping the FAI programmer, then everyone still gets a slice of the pie, no matter how distant their values. If nothing else, the more alien people should strive to help you more, so that you'll be willing to part with more value for them (the marginal value of providing assistance is greater for distant-preference folks).
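A toy numeric sketch of the argument above (my own illustration, not anything from the thread: the linear overlap model and the `helper_gain` function are invented purely for exposition). The point is that ceding the same amount of your value delivers more benefit to a close-preference helper than to a distant-preference one:

```python
def helper_gain(value_given_up, preference_overlap):
    """Toy model: value a helper receives per unit of value you cede.

    `preference_overlap` in [0, 1] is how much of your value the helper
    already shares. The overlapping fraction of what you cede counts as
    the helper's value too, so the helper benefits from both the shared
    part and the transferred part. This linear form is an assumption.
    """
    return value_given_up * (1 + preference_overlap)

V = 10.0  # the same cost to you in both cases
gain_close = helper_gain(V, preference_overlap=0.9)    # Person A
gain_distant = helper_gain(V, preference_overlap=0.1)  # Person B
assert gain_close > gain_distant  # same loss to you, more help delivered to A
```

Under this (assumed) model, a distant-preference helper must offer more assistance to earn the same delivered benefit, which is the marginal-value point in the comment.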

Comment deleted 21 February 2010 08:21:51PM *  [-]
Comment deleted 21 February 2010 01:04:11PM [-]
Comment author: Vladimir_Nesov 21 February 2010 02:04:09PM *  2 points [-]

But that works the other way around too. Somewhere hidden in the brain of a liberal western person is a murderer/terrorist/child abuser/fundamentalist, if you just perform the right set of edits.

Again, not all beliefs are equal. You don't want to use the procedure that'll find a murderer in yourself, you want to use the procedure that'll find a nice fellow in a murderer. And given such a procedure, you won't need to exclude murderers from extrapolated volition.

Comment author: Nick_Tarleton 18 February 2010 12:16:40AM *  2 points [-]

You seem uncharacteristically un-skeptical of convergence within that very large group, and between that group and yourself.

Comment deleted 18 February 2010 12:24:06AM *  [-]
Comment author: wedrifid 18 February 2010 05:13:59AM *  1 point [-]

Though, there are some scenarios where there would be divergence.

For example: All your stuff should belong to me. But I'd let you borrow it. ;)

Comment author: hal9000 18 February 2010 08:06:36PM -1 points [-]

Okay. Then why don't you apply that same standard to "human values"?

Comment author: Nick_Tarleton 18 February 2010 08:26:07PM 0 points [-]

Did you read No License To Be Human? No? Go do that.