
Wei_Dai comments on Why the beliefs/values dichotomy?

Post author: Wei_Dai 20 October 2009 04:35PM


Comment author: Wei_Dai 21 October 2009 12:17:07AM 3 points

It sounds like you're saying that independence is a necessary consequence of our preferences having limited information. I had considered this possibility and don't think it's right, because I can construct a set of preferences with both little independence and little information, just by choosing the preferences with a pseudorandom number generator.

I think there is still a puzzle here: why do our preferences show such a specific kind of structure (non-randomness)?
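
A minimal sketch of that pseudorandom construction, in Python, with a toy two-component outcome space (the names and numbers are illustrative, not from the original discussion): the entire preference specification is a short seed, so it carries very little information, yet the resulting utilities show essentially no independence.

```python
import random
from itertools import product

rng = random.Random(42)  # the whole "preference specification" is this one seed

outcomes_A = ["apple", "orange"]   # component 1 of a joint outcome
outcomes_B = ["apple", "orange"]   # component 2 of a joint outcome

# Assign a pseudorandom utility to every *joint* outcome <a, b>.
U = {pair: rng.random() for pair in product(outcomes_A, outcomes_B)}

# Independence (additive separability across components) would require
#   U(a1, b1) + U(a2, b2) == U(a1, b2) + U(a2, b1).
# For pseudorandom utilities this fails almost surely.
lhs = U[("apple", "apple")] + U[("orange", "orange")]
rhs = U[("apple", "orange")] + U[("orange", "apple")]
print(f"separability gap: {lhs - rhs:.4f}")  # nonzero => not independent
```

The point being that "little information" and "independence" come apart: the generating program is tiny, but the utilities it outputs don't factor across the two components.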

Comment author: Vladimir_Nesov 21 October 2009 01:02:10AM 3 points

That new preference of yours still can't distinguish the states of air molecules in the room, even if some of those states are made logically impossible by what's known about macro-objects. This shows both the source of dependence in precise preference and the source of independence in real-world approximations of preference. Independence remains wherever there is no computed information that would bring preference into contact with the facts. Preference is defined procedurally in the mind, and its expression is limited by what can be procedurally figured out.

Comment author: Wei_Dai 21 October 2009 11:38:05AM 1 point

I don't really understand what you mean at this point. Take my apples/oranges example, which seems to have nothing to do with macro vs. micro. The Axiom of Independence says I shouldn't choose the 3rd box. Can you tell me whether you think that's right, or wrong (meaning I can rationally choose the 3rd box), and why?

To make that example clearer, let's say that the universe ends right after I eat the apple or orange, so there are no further consequences beyond that.
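
For readers following along, the axiom being invoked is presumably the von Neumann-Morgenstern independence axiom. In standard notation, for lotteries A and B and any lottery C:

\[ A \succeq B \iff pA + (1-p)C \succeq pB + (1-p)C \quad \text{for all } p \in (0,1]. \]

If the 3rd box is a probability mixture of the first two, then strictly preferring it to both pure boxes is exactly the kind of preference this axiom (together with completeness and transitivity) rules out.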

Comment author: timtyler 21 October 2009 03:53:18PM 0 points

To make the example clearer, surely you would need to explain what the "<apple, orange>" notation was supposed to mean.

Comment author: Wei_Dai 22 October 2009 12:47:56AM 1 point

It's from this paragraph of http://lesswrong.com/lw/15m/towards_a_new_decision_theory/:

What if you have some uncertainty about which program our universe corresponds to? In that case, we have to specify preferences for the entire set of programs that our universe may correspond to. If your preferences for what happens in one such program are independent of what happens in another, then we can represent them by a probability distribution on the set of programs plus a utility function on the execution of each individual program. More generally, we can always represent your preferences as a utility function on vectors of the form <E1, E2, E3, …> where E1 is an execution history of P1, E2 is an execution history of P2, and so on.
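
A minimal sketch of the two representations in that paragraph, in Python, using two stand-in programs P1 and P2 with two execution histories each (all names and values hypothetical):

```python
from itertools import product

histories_P1 = ["E1a", "E1b"]  # possible execution histories of program P1
histories_P2 = ["E2a", "E2b"]  # possible execution histories of program P2

# Independent case: a probability distribution over programs plus a utility
# function on each program's executions; overall utility factors additively.
prob = {"P1": 0.5, "P2": 0.5}
u1 = {"E1a": 1.0, "E1b": 0.0}
u2 = {"E2a": 1.0, "E2b": 0.0}

def factored_utility(e1, e2):
    return prob["P1"] * u1[e1] + prob["P2"] * u2[e2]

# General ("more generally") case: an arbitrary utility on whole vectors
# <E1, E2>. This one rewards the two programs' histories *matching*, a
# dependence no factored representation can express: additive separability
# would need U(a,a) + U(b,b) == U(a,b) + U(b,a), but here 6 != 0.
U = {
    ("E1a", "E2a"): 3.0,
    ("E1a", "E2b"): 0.0,
    ("E1b", "E2a"): 0.0,
    ("E1b", "E2b"): 3.0,
}

for e1, e2 in product(histories_P1, histories_P2):
    print((e1, e2), "factored:", factored_utility(e1, e2), "general:", U[(e1, e2)])
```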

In this case I'm assuming preferences for program executions that aren't independent of each other, so it falls into the "more generally" category.

Comment author: timtyler 22 October 2009 06:21:33AM 0 points

Got an example?

You originally seemed to suggest that <apple, orange> represented some set of preferences.

Now you seem to be saying that it is a bunch of vectors representing possible universes on which some unspecified utility function might operate.