Cyan comments on Nonparametric Ethics - Less Wrong

Post author: Eliezer_Yudkowsky 20 June 2009 11:31AM


Comment author: Wei_Dai 21 June 2009 10:31:06AM  5 points

Parametric extrapolation actually works quite well in some cases. I'll cite a few examples that I'm familiar with:

I don't see any examples of nonparametric extrapolation that have similar success.

A major problem in Friendly AI is how to extrapolate human morality into transhuman realms. I don't know of any parametric approach to this problem that is without serious difficulties, but "nonparametric" doesn't really seem to help either. What does your advice "don't extrapolate if you can possibly avoid it" imply in this case? Pursue a non-AI path instead?

Comment author: Cyan 22 June 2009 02:02:10AM  1 point

What does your advice "don't extrapolate if you can possibly avoid it" imply in this case?

I distinguish "extrapolation" in the sense of extending an empirical regularity (as in Moore's law) from inferring a logical consequence of a well-supported theory (as in the black hole prediction). This is really a difference of degree, not kind, but for human science the distinction is a good abstraction. For FAI, I'd say the implication is that an FAI's morality-predicting component should be a working model of human brains in action.
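The "empirical regularity" sense of extrapolation can be made concrete with a small sketch: fit Moore's law as a straight line in log-space and evaluate the fitted curve outside the data range. The transistor counts below are rough order-of-magnitude illustrations chosen for this example, not exact figures.

```python
import math

# Illustrative (approximate) transistor counts per chip by year.
years = [1971, 1981, 1991, 2001]
counts = [2.3e3, 3.0e4, 1.2e6, 4.2e7]

# Parametric model: log2(count) = a + b * year, fit by ordinary least squares.
# Moore's law corresponds to b ~ 0.5, i.e. a doubling every ~2 years.
ys = [math.log2(c) for c in counts]
n = len(years)
mean_x = sum(years) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, ys)) \
    / sum((x - mean_x) ** 2 for x in years)
a = mean_y - b * mean_x

def predict(year):
    """Extrapolate the fitted regularity beyond the observed range."""
    return 2 ** (a + b * year)

doubling_time = 1 / b  # years per doubling implied by the fit
```

The point of the sketch is that the extrapolation rests entirely on the fitted parameters `a` and `b` continuing to describe the world, not on any underlying theory of why they should; this is exactly the kind of extension of an empirical regularity being contrasted with a deduction from well-supported theory.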