Vladimir_Nesov comments on Why the beliefs/values dichotomy? - Less Wrong

Post author: Wei_Dai 20 October 2009 04:35PM

Comment author: Vladimir_Nesov 20 October 2009 08:26:37PM 2 points

This comment is directly about the question of probability and utility. The division is not so much about considering the two things separately as it is about extracting a tractable part of whole human preference (prior+utility) into a well-defined mathematical object (the prior), while leaving all the hard issues of preference elicitation in the utility part. In practice it works like this: a human conceptualizes a problem so that a prior (described completely) can be fed to an automatic tool, and then the tool's conclusion about the aspect specified as probability is interpreted by a human again. People fill in the utility part by using their preference, even though they can't represent it as the remaining utility term. Economists, who have to create autonomous models of decision-making (as distinct from autonomous decision-making systems), must introduce the whole preference, but the result is so approximate that it's of no use in most other contexts.
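The division described above can be sketched in code. This is an illustrative sketch, not anything from the comment itself: the prior is the fully specified object an automatic tool can consume, while the utility numbers are hypothetical stand-ins for what a human's preference would supply.

```python
def expected_utility(prior, utility):
    """Combine a fully specified prior with a human-supplied utility."""
    return sum(p * utility[outcome] for outcome, p in prior.items())

# The prior over outcomes is the tractable, completely described part
# that can be handed to an automatic tool.
prior = {"rain": 0.3, "sun": 0.7}

# The utility part is filled in by human preference; these values are
# hypothetical placeholders for what elicitation would produce.
utility = {"rain": -1.0, "sun": 2.0}

print(expected_utility(prior, utility))  # 0.3*(-1.0) + 0.7*2.0 = 1.1
```

The tool only ever sees the prior; the interpretation of its output back into a decision is where the (unformalized) utility does its work.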

Because human decision-making divides preference between utility and prior in practice, with only the prior falling in the domain of things that are technically understood, there is a strong association of the prior with "knowledge" (hence "belief", though being people of science we expel feeling-associated connotations from the concept). Utility remains vague, but it is the necessary part that completes the picture into an expression of whole preference, so introducing utility into a problem is strongly associated with values.

Comment author: Wei_Dai 20 October 2009 10:32:18PM 2 points

But why do human preferences exhibit the (approximate) independence which allows the extraction to take place?

Comment author: SilasBarta 20 October 2009 10:49:08PM 3 points

Simple. They don't.

Maybe it's just me, but this looks like another case of overextrapolation from a community of rationalists to all of humanity. You think about all the conversations you've had distinguishing beliefs from values, and you figure everyone else must think that way.

In reality, people don't normally make such a precise division. But don't take my word for it: go up to some random mouthbreather and try to find out how well they adhere to a value/belief distinction. Ask them whether the utility assigned to an outcome or its probability was the bigger factor in a decision.

No one actually does those calculations consciously; if anything like them happens non-consciously, it is extremely economical in computation.

Comment author: Vladimir_Nesov 20 October 2009 10:51:17PM 1 point

Simple: the extraction cuts across preexisting independencies. (I don't quite see what you refer to by "extraction", but my answer seems general enough to cover most possibilities.)

Comment author: Wei_Dai 20 October 2009 11:01:31PM 1 point

I'm referring to the extraction that you were talking about: extracting human preference into prior and utility. Again, the question is why the necessary independence for this exists in the first place.

Comment author: Vladimir_Nesov 20 October 2009 11:06:03PM 2 points

I was talking about extracting a prior about a narrow situation as the simple, extractable aspect of preference, period. Utility is just the rest: whatever remains unextractable in preference.

Comment author: Wei_Dai 20 October 2009 11:23:44PM 1 point

Ok, I see. In that case, do you think there is still a puzzle to be solved, about why human preferences seem to have a large amount of independence (compared to, say, a set of randomly chosen transitive preferences), or not?

Comment author: Vladimir_Nesov 20 October 2009 11:41:42PM 3 points

That's just a different puzzle. You are asking a question about the properties of human preference now, not about the prior/utility separation. I don't expect strict independence anywhere.

Independence is indifference, made strict in the form of probability by the maximum-entropy rule, and it arises from our inability to see and precisely evaluate all consequences. If you know your preference about an event but have no preference over (or understanding of) the uniform elements it consists of, you are indifferent to those elements; hence the maximum-entropy rule, and hence the air molecules in the room. Multiple events that you care about only in themselves, and not in the way they interact, are modeled as independent.
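The maximum-entropy point can be made concrete with a toy sketch (my illustration, with hypothetical events and numbers): when preference only constrains each event's marginal probability, the maximum-entropy joint distribution over them is exactly the product of the marginals, i.e. the events come out independent.

```python
import itertools

# Hypothetical events we care about only in themselves, not jointly.
marginals = {"A": 0.3, "B": 0.6}

# Maximum-entropy joint distribution given only the marginal constraints:
# the product distribution over the four outcomes.
joint = {}
for a, b in itertools.product([True, False], repeat=2):
    pa = marginals["A"] if a else 1 - marginals["A"]
    pb = marginals["B"] if b else 1 - marginals["B"]
    joint[(a, b)] = pa * pb

# Independence falls out: P(A and B) = P(A) * P(B).
print(joint[(True, True)])  # 0.3 * 0.6 = 0.18
```

Any other joint distribution matching those marginals would encode extra information about how A and B interact, which indifference does not supply.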

[W]hy human preferences seem to have a large amount of independence (compared to, say, a set of randomly chosen transitive preferences)[?]

Randomness is info, so of course the result will be more complex. Where you are indifferent, random choice will fill in the blanks.

Comment author: Wei_Dai 21 October 2009 12:17:07AM 3 points

It sounds like what you're saying is that independence is a necessary consequence of our preferences having limited information. I had considered this possibility and don't think it's right, because I can give a set of preferences with little independence and also little information, just by choosing the preferences using a pseudorandom number generator.

I think there is still a puzzle here: why do our preferences show a very specific kind of structure (non-randomness)?

Comment author: Vladimir_Nesov 21 October 2009 01:02:10AM 3 points

That new preference of yours still can't distinguish the states of air molecules in the room, even if some of those states are made logically impossible by what's known about macro-objects. This shows both the source of dependence in precise preference and the source of independence in real-world approximations of preference. Independence remains wherever there is no computed information that allows preference to be brought into contact with facts. Preference is defined procedurally in the mind, and its expression is limited by what can be procedurally figured out.

Comment author: Wei_Dai 21 October 2009 11:38:05AM 1 point

I don't really understand what you mean at this point. Take my apples/oranges example, which seems to have nothing to do with macro vs. micro. The Axiom of Independence says I shouldn't choose the 3rd box. Can you tell me whether you think that's right or wrong (wrong meaning I can rationally choose the 3rd box), and why?

To make that example clearer, let's say that the universe ends right after I eat the apple or orange, so there are no further consequences beyond that.