
Wei_Dai comments on Why the beliefs/values dichotomy? - Less Wrong

Post author: Wei_Dai 20 October 2009 04:35PM




Comment author: Wei_Dai 20 October 2009 11:01:31PM 1 point

I'm referring to the extraction that you were talking about: extracting human preference into prior and utility. Again, the question is why the independence necessary for this exists in the first place.

Comment author: Vladimir_Nesov 20 October 2009 11:06:03PM 2 points

I was talking about extracting a prior about a narrow situation as the simple, extractable aspect of preference, period. Utility is just the rest: whatever remains unextractable in preference.

Comment author: Wei_Dai 20 October 2009 11:23:44PM 1 point

Ok, I see. In that case, do you think there is still a puzzle to be solved, about why human preferences seem to have a large amount of independence (compared to, say, a set of randomly chosen transitive preferences), or not?

Comment author: Vladimir_Nesov 20 October 2009 11:41:42PM 3 points

That's just a different puzzle. You are asking a question about properties of human preference now, not of prior/utility separation. I don't expect strict independence anywhere.

Independence is indifference (due to the inability to see and precisely evaluate all consequences) made strict in the form of probability, by decree of maximum entropy. If you know your preference about an event, but have no preference over, or understanding of, the uniform elements it consists of, then you are indifferent to those elements: hence the maximum-entropy rule, and the air molecules in the room. Multiple events that you care about only in themselves, but not in the way they interact, are modeled as independent.

[W]hy human preferences seem to have a large amount of independence (compared to, say, a set of randomly chosen transitive preferences)[?]

Randomness is info, so of course the result will be more complex. Where you are indifferent, random choice will fill in the blanks.
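
A small numerical illustration of the maximum-entropy point above (the marginals 0.7 and 0.4 are arbitrary, and this is only a sketch, not anything from the thread): among all joint distributions over two binary events with those marginals fixed and nothing said about their interaction, the entropy-maximizing one treats the events as independent.

    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    p_a, p_b = 0.7, 0.4  # how much you care about (assign to) events A and B separately

    # The joint over (A, B) has one free parameter q = P(A and B), constrained so
    # that all four cell probabilities stay non-negative.
    best_q, best_h = None, -np.inf
    for q in np.linspace(max(0.0, p_a + p_b - 1.0), min(p_a, p_b), 10001):
        joint = np.array([q, p_a - q, p_b - q, 1.0 - p_a - p_b + q])
        h = entropy(joint)
        if h > best_h:
            best_q, best_h = q, h

    print(best_q)  # ~0.28 = p_a * p_b: maximum entropy models A and B as independent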

Comment author: Wei_Dai 21 October 2009 12:17:07AM 3 points

It sounds like what you're saying is that independence is a necessary consequence of our preferences having limited information. I had considered this possibility and don't think it's right, because I can give a set of preferences with little independence and also little information, just by choosing the preferences using a pseudorandom number generator.

I think there is still a puzzle here: why do our preferences show a very specific kind of structure (non-randomness)?
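
A minimal sketch of the pseudorandom counterexample (the outcome names and the seed are invented; "independence" is checked here as additive separability of the utility table):

    # Toy version: a utility function over joint outcomes <E1, E2> generated from a
    # short pseudorandom seed. Its description length is tiny, yet it almost surely
    # does not decompose into independent per-component utilities.
    import itertools
    import random

    outcomes_1 = ["apple", "orange"]   # possible executions of "program" P1
    outcomes_2 = ["rain", "sun"]       # possible executions of "program" P2

    rng = random.Random(42)            # essentially all the information content is this seed
    utility = {pair: rng.random() for pair in itertools.product(outcomes_1, outcomes_2)}

    # Independence would let us write U(e1, e2) = u1(e1) + u2(e2). For a 2x2 table
    # such a decomposition exists iff the interaction term below is zero.
    interaction = (utility[("apple", "rain")] + utility[("orange", "sun")]
                   - utility[("apple", "sun")] - utility[("orange", "rain")])
    print(interaction)  # almost surely nonzero: cheap to describe, but not independent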

Comment author: Vladimir_Nesov 21 October 2009 01:02:10AM 3 points

That new preference of yours still can't distinguish the states of air molecules in the room, even if some of those states are made logically impossible by what's known about macro-objects. This shows both the source of dependence in precise preference and the source of independence in real-world approximations of preference. Independence remains wherever there is no computed info that allows preference to be brought into contact with facts. Preference is defined procedurally in the mind, and its expression is limited by what can be procedurally figured out.

Comment author: Wei_Dai 21 October 2009 11:38:05AM 1 point

I don't really understand what you mean at this point. Take my apples/oranges example, which seems to have nothing to do with macro vs. micro. The Axiom of Independence says I shouldn't choose the 3rd box. Can you tell me whether you think that's right, or wrong (meaning I can rationally choose the 3rd box), and why?

To make that example clearer, let's say that the universe ends right after I eat the apple or orange, so there are no further consequences beyond that.
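
For reference, the axiom being invoked here is the von Neumann-Morgenstern independence axiom: if lottery A is preferred to lottery B, then for any lottery C and any probability p in (0, 1],

    the mixture pA + (1-p)C is preferred to pB + (1-p)C.

The question in the comment is whether violating this (by choosing the 3rd box) can still be rational.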

Comment author: timtyler 21 October 2009 03:53:18PM 0 points

To make the example clearer, surely you would need to explain what the "<apple, orange>" notation was supposed to mean.

Comment author: Wei_Dai 22 October 2009 12:47:56AM 1 point

It's from this paragraph of http://lesswrong.com/lw/15m/towards_a_new_decision_theory/ :

What if you have some uncertainty about which program our universe corresponds to? In that case, we have to specify preferences for the entire set of programs that our universe may correspond to. If your preferences for what happens in one such program are independent of what happens in another, then we can represent them by a probability distribution on the set of programs plus a utility function on the execution of each individual program. More generally, we can always represent your preferences as a utility function on vectors of the form <E1, E2, E3, …> where E1 is an execution history of P1, E2 is an execution history of P2, and so on.

In this case I'm assuming preferences for program executions that aren't independent of each other, so it falls into the "more generally" category.

Comment author: timtyler 22 October 2009 06:21:33AM 0 points

Got an example?

You originally seemed to suggest that <apple, orange> represented some set of preferences.

Now you seem to be saying that it is a bunch of vectors representing possible universes on which some unspecified utility function might operate.
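
A hedged sketch of what the two representations in the quoted paragraph look like in practice (the program names, execution histories, and numbers are invented for illustration): the first factors into a prior over programs plus per-program utilities; the second is a utility function applied directly to vectors such as <E1a, E2b>.

    # Factored case: preferences about what happens in P1 are independent of what
    # happens in P2, so a prior over programs plus a per-program utility suffices,
    # and value is computed as an expectation.
    prior = {"P1": 0.6, "P2": 0.4}
    utility_per_program = {
        "P1": {"E1a": 1.0, "E1b": 0.0},
        "P2": {"E2a": 0.5, "E2b": 0.2},
    }

    def factored_value(history):
        # history maps each program to its execution history
        return sum(prior[p] * utility_per_program[p][history[p]] for p in prior)

    # General case: a single utility function directly on vectors <E1, E2>, which
    # can encode interactions between programs that the factored form cannot.
    general_utility = {
        ("E1a", "E2a"): 1.0,
        ("E1a", "E2b"): 0.1,
        ("E1b", "E2a"): 0.2,
        ("E1b", "E2b"): 0.9,  # an interaction the factored form cannot reproduce
    }

    print(factored_value({"P1": "E1a", "P2": "E2b"}))  # 0.6*1.0 + 0.4*0.2 = 0.68
    print(general_utility[("E1b", "E2b")])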