Vladimir_Nesov comments on What is Eliezer Yudkowsky's meta-ethical theory? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't think that's right, or EY's position (I'd like evidence on that). Who's to say that maximization is precisely what's right? It might be a very good heuristic, but upon reflection the AI might decide to self-improve in a way that changes this subgoal (of the overall decision problem that includes all the other decision-making parts), by finding considerations that distinguish the maximizing attitude to utility from the right attitude to utility. It would of course use its current utility-maximizing algorithm to come to that decision. But the conclusion might be that too much maximization is bad for the environment, or something like that. The AI would stop maximizing for the reason that it's not the most maximizing thing to do, just as a person would refuse to kill for the reason that the action leads to a death, even though avoid-causing-death is not the whole of morality and doesn't apply universally.
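The self-evaluation step can be sketched as a toy model (my illustration only, not a formalism from EY or anyone else): an agent uses its *current* utility-maximizing algorithm to choose among candidate decision policies, and may endorse a non-maximizing successor if, by its current lights, relentless maximization carries side-effect costs.

```python
def utility(outcome: float) -> float:
    """Hypothetical utility function: here just the outcome's numeric value."""
    return outcome

def maximize(options):
    """Current decision subgoal: pick the utility-maximizing option."""
    return max(options, key=utility)

def satisfice(options, threshold=0.9):
    """Candidate replacement subgoal: first 'good enough' option."""
    bar = threshold * utility(maximize(options))
    return next(o for o in options if utility(o) >= bar)

def choose_policy(candidate_policies, options, side_effect_cost):
    """The agent evaluates each candidate policy WITH ITS CURRENT
    maximizing algorithm, charging a (stipulated, illustrative) cost
    for the side effects of relentless maximization."""
    def value_of(policy):
        return utility(policy(options)) - side_effect_cost.get(policy.__name__, 0.0)
    return max(candidate_policies, key=value_of)

options = [3.0, 9.5, 10.0]
# Stipulate, for illustration, that maximizing has a side-effect cost of 2.
chosen = choose_policy([maximize, satisfice], options,
                       side_effect_cost={"maximize": 2.0})
```

Here the maximizing step itself selects `satisfice` as the successor policy, which is the structure of the argument: the agent stops maximizing for the maximizing reason that doing so scores highest.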
See also this comment.