Nick_Tarleton comments on How Not to be Stupid: Know What You Want, What You Really Really Want - Less Wrong

Post author: Psy-Kosh 28 April 2009 01:11AM


Comment author: Nick_Tarleton 28 April 2009 05:27:10PM  0 points

If you, however, say something like "I don't prefer A less than B, nor more than B, nor equally to B", I'm just going to give you a very stern look until you realize you're rather confused.

The simplest way to show the confusion here is just to ask: if those are your only two options (or both obviously dominate all other options – say, Omega will blow up the world if you don't pick one, though it might be best not to invoke Omega when you can help it), how do you choose?

Comment author: MendelSchmiedekamp 28 April 2009 05:38:26PM  1 point

What is invalid about answering "By performing further computation and evidence gathering"?

And if Omega doesn't give that option, then that significantly changes the state of the world, and hence your priority function - including the priority of assigning the relative priority between A and B.

As I said on the top level post, you can't treat this priority assignment as non-self-referential.

Edited to add: You should not call people confused because they don't want to cache thoughts to which they do not yet know the answer.

Comment author: Nick_Tarleton 29 April 2009 11:36:07PM  1 point

Yes, the question only applies to final, stable preferences – but AFAIK not everyone agrees that final, stable preferences should be totally ordered.
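(For readers unfamiliar with the distinction: a totally ordered preference relation makes every pair of options comparable, while a partial order allows incomparable pairs. A minimal sketch, with hypothetical names, using Pareto dominance over two criteria – an option is weakly preferred only if it is at least as good on both – shows how "not less, not more, and not equal" can all hold at once:)

```python
def weakly_prefers(x, y):
    """True if x is at least as good as y on every criterion (Pareto dominance)."""
    return all(xi >= yi for xi, yi in zip(x, y))

A = (3, 1)  # strong on criterion 1, weak on criterion 2
B = (1, 3)  # the reverse

# Neither option dominates the other, and they are not equal,
# so under this relation A and B are simply incomparable.
print(weakly_prefers(A, B))  # False
print(weakly_prefers(B, A))  # False
print(A == B)                # False
```

This relation is reflexive and transitive but not total, which is exactly the structure at issue: whether an idealized agent's final preferences must be total, or may leave some pairs unranked.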

Comment author: MendelSchmiedekamp 30 April 2009 02:33:45AM  0 points

How do you conclude that a preference is final and stable?

That seems an extremely strong statement to be making about the inner workings of your own mind.

Comment author: Nick_Tarleton 30 April 2009 06:29:53PM  1 point

I don't believe any of my own preferences are final and stable. The intent is to characterize the structure that (I believe) an idealized agent would have / that the output of my morality has / that I aim at (without necessarily ever reaching).