tut comments on Nonparametric Ethics - Less Wrong

Post author: Eliezer_Yudkowsky 20 June 2009 11:31AM




Comment author: tut 21 June 2009 11:49:04AM

> A major problem in Friendly AI is how to extrapolate human morality into transhuman realms. I don't know of any parametric approach to this problem that isn't without serious difficulties, but "nonparametric" doesn't really seem to help either. What does your advice "don't extrapolate if you can possibly avoid it" imply in this case? Pursue a non-AI path instead?

I think it implies that a Friendly sysop should not dream up a transhuman society and then try to reshape humanity into it, but rather let us evolve at our own pace, attending only to the issues that are relevant at each stage.