Wei_Dai comments on Pluralistic Moral Reductionism - Less Wrong

Post author: lukeprog 01 June 2011 12:59AM




Comment deleted 05 June 2011 06:51:11PM
Comment author: Wei_Dai 05 June 2011 09:45:06PM 2 points

If I am right that the whole purpose of this sequence has to do with friendly AI research,

Luke confirms your guess in the comments section of this post.

then how is it useful to devote so many resources to explaining the basics, instead of trying to figure out how to design, or define mathematically, something that could extrapolate human volition?

I can see a couple of benefits of going over the basics first. One, it lets Luke confirm that his understanding of the basics is correct, and two, it can interest others in working in the same area (or, if they are already interested, help bring them up to the same level as Luke). I would like to see Luke continue this sequence. (Unless of course he already has some good new ideas about FAI, in which case he should write those down first!)

Comment author: XiXiDu 06 June 2011 08:26:59AM 1 point

But how does ethics matter for friendly AI? If a friendly AI is going to figure out what humans desire, by extrapolating their volition, might it conclude that our volition is immoral and therefore undesirable?

Comment author: Will_Sawin 06 June 2011 11:09:46AM 1 point

What morality unit would it have other than humans' volition?

If it has another, separate volition unit, yes.

If not, then only if humans fundamentally contradict themselves, which seems unlikely, because biological systems are pretty robust against that.

Comment author: XiXiDu 06 June 2011 11:35:18AM 0 points

What morality unit would it have other than humans' volition?

I am not sure what a 'morality unit' is supposed to be, or how it would differ from a volition unit. Morality is either part of our volition, instrumental to it, or an imperative. In each case one could ask what we want and arrive at morality.

Comment author: Will_Sawin 06 June 2011 03:51:09PM 3 points

What I'm saying is this: if Clippy tried to calculate our volition, he would conclude that our volition is immoral. (Probably. Maybe our volition IS paperclips.)

But if we programmed an AI to calculate our volition and use that as its volition, and our morality as its morality, and so on, then it would not find our volition immoral unless we find our volition immoral, which seems unlikely.

Comment author: Peterdjones 06 June 2011 04:04:55PM 1 point

An AI that was smarter than us might deduce that we were not applying the Deep Structure of our morality properly because of bias or limited intelligence. It might conclude that human morality requires humans to greatly reduce their numbers in order to lessen the impact on other species, for instance.

Comment author: lukeprog 08 June 2011 07:13:05AM 0 points

I can see a couple of benefits of going over the basics first. One, it lets Luke confirm that his understanding of the basics is correct, and two, it can interest others in working in the same area (or, if they are already interested, help bring them up to the same level as Luke).

Correct!