Vladimir_Nesov comments on Human values differ as much as values can differ - Less Wrong

Post author: PhilGoetz 03 May 2010 07:35PM

Comment author: Vladimir_Nesov 03 May 2010 08:53:35PM 2 points

I hear you, but I believe it's a very strange and unstable definition. When you say that you want an AI that "optimizes X", you implicitly want X to be optimized in a way you'd want it optimized, understood in the way you want it understood, and so on. Failing to also specify your whole morality as the interpreter of "optimize X" will produce all sorts of unintended consequences, making any such formal specification unrelated to the subject matter you intuitively wanted to discuss by introducing the "optimize X" statement.
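
(A minimal sketch of the point above, with the actions and numbers entirely invented for illustration: an optimizer handed a formal proxy for "make the room clean" exploits the gap between the proxy and the intent behind it.)

    # The designer means "clean the room"; the formal proxy handed to
    # the optimizer only says "minimize visible dust, minus effort".
    actions = {
        "vacuum the floor":    {"visible_dust_after": 0, "effort": 5},
        "turn off the lights": {"visible_dust_after": 0, "effort": 1},  # dust remains, just unseen
    }

    def proxy_score(outcome):
        # What the formal specification rewards -- not what was meant.
        return -outcome["visible_dust_after"] - outcome["effort"]

    best = max(actions, key=lambda a: proxy_score(actions[a]))
    print(best)  # -> "turn off the lights": the proxy, not the intent, gets optimized

The optimizer is not misbehaving here; the specification simply fails to carry the morality that was supposed to interpret it.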

In the context of superintelligent AI, this means that you effectively have to start with a full (not just putative) FAI and then make a wish. But what should the FAI do with your wish, in terms of its decisions and of what it does with the world? Most likely, disregard it completely. This is the reason there are no purple-button FAIs.
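
(A toy model, with hypothetical values invented for illustration, of why the wish gets disregarded: an agent that already optimizes the full value function V has no slack left for a wish, and follows it only where V is indifferent.)

    # Hypothetical value function; only the decision rule matters here.
    V = {"cure disease": 10, "grant the wish": 3, "do nothing": 0}

    def fai_decision(wish):
        best = max(V, key=V.get)
        # The wish prevails only if it ties what V already prefers.
        return wish if V.get(wish, float("-inf")) >= V[best] else best

    print(fai_decision("grant the wish"))  # -> "cure disease": the wish is disregarded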

Comment author: RobinZ 03 May 2010 09:04:05PM 0 points

I don't disagree with you. I was just responding to the challenge set in the post.