
Vladimir_Nesov comments on The Urgent Meta-Ethics of Friendly Artificial Intelligence

Post author: lukeprog, 01 February 2011 02:15PM

Comment author: Vladimir_Nesov 04 February 2011 01:33:00PM  1 point

I defy the possibility that we may "not care about logic" in the sense that you suggest.

In a decision between what's logical and what's right, you ought to choose what's right.

Comment author: TheOtherDave 04 February 2011 03:50:19PM  3 points

If you can summarize your reasons for thinking that's actually a conflict that can arise for me, I'd be very interested in them.

Comment author: Vladimir_Nesov 04 February 2011 05:40:06PM  4 points

Consider a possible self-improvement that changes your inference system so that it (1) becomes significantly more efficient at inferring the kinds of facts that help you make right decisions, and (2) acquires an additional tiny chance of being inconsistent. If all you care about is correctness, notice that implementing this self-improvement will make you less correct: it will increase the probability that you'll produce incorrect inferences in the future. On the other hand, the expected utility of this decision argues that you should take it. This is a conflict, resolved either by self-improving or not.
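To make the trade-off concrete, here is a minimal sketch of the expected-utility comparison. The probabilities and utilities below are invented purely for illustration and are not taken from the comment; the point is only that a tiny added chance of inconsistency can be outweighed by the gain from better inference.

```python
# Illustrative sketch (made-up numbers, not from the comment): compare the
# expected utility of adopting a self-improvement that makes inference more
# useful but adds a tiny chance of inconsistency, versus keeping the status quo.

P_INCONSISTENT = 1e-6   # hypothetical added chance the new inference system is inconsistent
U_INCONSISTENT = -1e3   # hypothetical utility if the system turns out inconsistent
U_IMPROVED = 10.0       # hypothetical utility gain from faster, more useful inference
U_STATUS_QUO = 0.0      # baseline utility of keeping the current system


def expected_utility_of_adopting() -> float:
    """Expected utility of taking the self-improvement."""
    return (1 - P_INCONSISTENT) * U_IMPROVED + P_INCONSISTENT * U_INCONSISTENT


def expected_utility_of_declining() -> float:
    """Expected utility of keeping the current inference system."""
    return U_STATUS_QUO


if __name__ == "__main__":
    adopt = expected_utility_of_adopting()
    decline = expected_utility_of_declining()
    print(f"adopt:   {adopt:.6f}")
    print(f"decline: {decline:.6f}")
    # With these numbers, adopting maximizes expected utility even though it
    # strictly increases the probability of producing incorrect inferences.
    print("decision:", "adopt" if adopt > decline else "decline")
```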

Comment author: TheOtherDave 04 February 2011 07:04:18PM  0 points

That's fair. Yes, agreed that this is a decision between maximizing my odds of being logical and maximizing my odds of being right, which is a legitimate example of the conflict you implied. And I guess I agree that if being right has high utility then it's best to choose what's right.

Thanks.

Comment author: Vladimir_Nesov 04 February 2011 07:07:51PM  3 points

And I guess I agree that if being right has high utility then it's best to choose what's right.

Seeking high utility is right (and following the rules of logic is right), not the other way around. "Right" is the unreachable standard by which things should be; "utility" is merely a heuristic representation of it.

Comment author: TheOtherDave 04 February 2011 07:15:32PM  0 points

It isn't clear to me what that statement, or its negation, actually implies about the world. But I certainly don't think it's false.