David_Gerard comments on Two questions about CEV that worry me - Less Wrong

Post author: cousin_it, 23 December 2010 03:58PM


Comment author: DanArmak, 23 December 2010 06:53:03PM, 5 points

How can anyone sincerely want to build an AI that fulfills anything except their own current, personal volition?

That's exactly my objection to CEV. No-one acts on anything but their personal desires and values, by definition. Eliezer's personal desire might be to implement CEV of humanity (whatever it turns out to be). I believe, however, that for well over 99% of humans this would not be the best possible outcome they might desire. At best it might be a reasonable compromise, but that would depend entirely on what the CEV actually ended up being.

Comment author: Eliezer_Yudkowsky, 23 December 2010 09:36:29PM, 7 points

Eliezer's personal desire might be to implement CEV of humanity (whatever it turns out to be). I believe, however, that for well over 99% of humans this would not be the best possible outcome they might desire.

I'm not clear on what you could mean by this. Do you mean that you think the process just doesn't work as advertised, so that 99% of human beings end up definitely unhappy and with there existing some compromise state that they would all have preferred to CEV? Or that 99% of human beings all have different maxima so that their superposition is not the maximum of any one of them, but there is no single state that a supermajority prefers to CEV?

Comment author: DanArmak, 24 December 2010 11:11:07AM, 6 points

Or that 99% of human beings all have different maxima so that their superposition is not the maximum of any one of them, but there is no single state that a supermajority prefers to CEV?

Yes. I expect CEV, if it works as advertised, to lead to a state that almost all humans (as they are today, with no major cognitive changes) would see as an acceptable compromise, an improvement over things today, but far worse than their personal desires implemented at the expense of the rest of humankind.

Therefore, while working on the CEV of humanity might be a good compromise that enables cooperation, I expect any group working on it to prefer implementing that group's own CEV instead.

You say that you (and all people on this project) really prefer to take the CEV of all humanity. Please explain to me why - I honestly don't understand. How did you end up with a rare preference among humans, that says "satisfy all humans even though their desires might be hateful to me"?

Comment author: nazgulnarsil, 25 December 2010 08:47:50AM (edited), 0 points

"but far worse than their personal desires implemented at the expense of the rest of humankind."

Uh... I thought this was sort of the point. Also, given holodecks (or experience machines of any sort), I disagree.

EDIT: never mind, conversational context mismatch.

Comment author: DanArmak, 25 December 2010 09:13:47AM, 1 point

If that's the point, then why does EY prefer it over implementing the CEV of himself and a small group of other people?

As for holodecks (and simulations): as long as people are aware they are in a simulation, I think many would care no less about the state of the external world. (At a minimum they must care somewhat, if only to ensure their simulation continues to run.)