Just to distance this very interesting question from expected utility maximization: "Beliefs" sound like they are about couldness, and values about shouldness. Couldness is about behavior of the environment outside the agent, and shouldness is about behavior of the agent. Of course, the two only really exist in interaction, but as systems they can be conceptualized separately. When an agent asks what it could do, the question is really about what effects in the environment could be achieved (some Tarskian hypocrisy here: using "could" to explain "couldness"). Beliefs are what's assumed, and values are what's asserted. In a decision tree, beliefs are associated with knowledge about the other agent's possible actions, and values with the choice of the present agent's action. Both are aspects of the system, but playing different roles in the interaction: making a choice versus accepting a choice. Naturally, there is a duality here, when the sides are exchanged: my values become your beliefs, and my beliefs become your values. Choice of representation is not that interesting, as it's all interpretation: nothing changes in behavior.
It seems clear that our preferences do satisfy Independence, at least approximately.
How big of a problem does this simple example signify?
I have a tentative answer for the second question of "Why this representation?". Given that a set of preferences can be represented as a probability function and a utility function, that representation seems computationally more convenient than using two probability functions, since then you only have to do half of the Bayesian updating.
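To make the computational point concrete, here is a minimal sketch (a toy example of my own, with made-up worlds and numbers): conditioning the (prior, utility) pair on evidence touches only the prior, while the equivalent two-probability-function representation requires conditioning both distributions.

```python
# Toy illustration (hypothetical worlds and numbers): compare updating a
# (prior, utility) pair with updating the equivalent two-distribution
# representation Q(w) proportional to P(w) * U(w).

worlds = ["w1", "w2", "w3"]
P = {"w1": 0.5, "w2": 0.3, "w3": 0.2}   # prior ("beliefs")
U = {"w1": 1.0, "w2": 4.0, "w3": 10.0}  # utility ("values"), assumed positive

def normalize(d):
    z = sum(d.values())
    return {w: p / z for w, p in d.items()}

def condition(d, evidence):
    """Bayesian update of a distribution on an event (a set of worlds)."""
    return normalize({w: (p if w in evidence else 0.0) for w, p in d.items()})

# Equivalent representation as two probability functions.
Q = normalize({w: P[w] * U[w] for w in worlds})

evidence = {"w2", "w3"}  # we learn that w1 is ruled out

# Representation 1: one Bayesian update (P only); U is untouched.
P_post = condition(P, evidence)
eu = sum(P_post[w] * U[w] for w in worlds)

# Representation 2: two Bayesian updates (both P and Q are conditioned).
Q_post = condition(Q, evidence)
# Utilities are recovered (up to scale) as the ratio Q/P on surviving worlds.
ratio = {w: Q_post[w] / P_post[w] for w in worlds if P_post[w] > 0}

print(eu)     # expected utility computed from (P, U)
print(ratio)  # proportional to U on the surviving worlds
```

The ranking of prospects comes out the same either way; the difference is just that the second representation carries the Bayesian-updating cost twice.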
Another part of this question is that such a set of preferences can usually be decomposed many different ways into probability and utility, so what explains the particular decomposition that we have? I think there should have be...
"Of all the axioms, independence is the most often discarded. A variety of generalized expected utility theories have arisen, most of which drop or relax the independence axiom."
Paul Churchland calls the belief/values (he says belief/desires) model "folk psychology" and assigns a low probability to it "being smoothly reduced by neuroscience" rather than being completely disregarded like, say, the phlogiston theory of combustion. The paper is called Eliminative Materialism and the Propositional Attitudes and was printed in The Journal of Philosophy. I didn't find the paper all that convincing, but your mileage may vary.
This paper was cited along with another by someone (can't remember who) arguing that the bel...
This comment is directly about the question of probability and utility. The division is not so much about considering the two things separately, as it is about extracting tractable understanding of the whole human preference (prior+utility) into a well-defined mathematical object (prior), while leaving all the hard issues with elicitation of preference in the utility part. In practice it works like this: a human conceptualizes a problem so that a prior (that is described completely) can be fed to an automatic tool, then the tool's conclusion about the aspect s...
It's not an "accidental" product of evolution that organisms are goal-directed and have values. Evolution made creatures that way for a reason - organisms that pursue their biological goals (without "updating" them) typically have more offspring and leave more descendants.
Mixing up your beliefs and values would be an enormous mistake - in the eyes of evolution. You might then "update" your values - trashing them in the process - a monumental disaster for your immortal coils.
This assumption is central to establishing the mathematical structure of expected utility maximization, where you value each possible world separately using the utility function, then take their weighted average. If your preferences were such that A&C > B&C but A&D < B&D, then you wouldn’t be able to do this.
I can imagine having preferences that don't value each possible world separately. I can also imagine doing other things to my utility function than maximising expectation. For example, if I maximised the top quartile of expecte...
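One way to cash out "maximising the top quartile" (a hypothetical reading, with made-up numbers): rank gambles by the mean utility of their best quarter of probability mass rather than by the overall expectation. The two rankings can disagree, so this really is a different decision rule.

```python
# Hypothetical illustration: ranking two gambles by overall expectation
# versus by the mean of their top-quartile outcomes.

def expectation(lottery):
    """lottery: list of (utility, probability) pairs whose probabilities sum to 1."""
    return sum(p * x for x, p in lottery)

def top_quartile_mean(lottery):
    """Mean utility conditional on landing in the best 25% of probability mass."""
    remaining = 0.25
    total = 0.0
    for x, p in sorted(lottery, reverse=True):  # best outcomes first
        take = min(p, remaining)
        total += take * x
        remaining -= take
        if remaining <= 0:
            break
    return total / 0.25

safe = [(10.0, 1.0)]                 # utility 10 for sure
risky = [(12.0, 0.3), (0.0, 0.7)]    # 30% chance of utility 12, otherwise 0

print(expectation(safe), expectation(risky))              # 10.0 vs 3.6
print(top_quartile_mean(safe), top_quartile_mean(risky))  # 10.0 vs 12.0
```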
Here, have a mathematical perspective that conflates beliefs and values:
Suppose that some agent is given a choice between A and B. A is an apple. B is an N chance of a banana, otherwise nothing. The important thing here is the indifference equation: iff U(apple) = N*U(banana), the agent is indifferent between A and B. Further suppose that N is 50%, and the agent likes bananas twice as much as it likes apples. In this case, at least, the agent might as well modify itself to believe that N is 20% and to like bananas five times as much as apple...
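A quick check of the arithmetic, writing $u_a = U(\text{apple})$ and $u_b = U(\text{banana})$: the choice depends only on how $N \cdot u_b$ compares to $u_a$, so any joint rescaling of belief and value that preserves that product leaves behavior unchanged. With the numbers above,

\[
0.5 \times 2u_a = u_a \qquad \text{and} \qquad 0.2 \times 5u_a = u_a ,
\]

so both the original and the modified agent sit exactly at the indifference point.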
I think values (in a finite agent) also need to have some role in what beliefs "should" be stored/updated/remembered. Of course, in theories which don't constrain the agent's computational ability this isn't needed.
I dispute your premise: what makes you so sure people do decompose their thoughts into beliefs and values, and find these to be natural, distinct categories? Consider the politics as mind-killer phenomenon. That can be expressed as, "People put your words into a broader context of whether they threaten their interests, and argue for or against your statements on that basis."
For example, consider the difficulty you will have communicating your position if you believe both a) global warming is unlikely to cause any significant problems in the bus...
Maybe these have to do with differences across individuals. My beliefs/values may be mashed together and impossible to separate, but I expect other people's beliefs to mirror my own more closely than their values do.
Because it's much easier to use beliefs shorn of values as building blocks in a machine that does induction, inference, counterfactual reasoning, planning, etc., compared to belief-values that are all tied up together.
Sea slugs and Roombas don't have the beliefs/values separation because the extra complexity isn't worth it. Humans have it to some degree and rule the planet. AIs might have even more success.
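A minimal sketch of that modularity claim (my own illustration, with made-up states and goals): a single world-model ("beliefs") can be reused unchanged by a planner pursuing different goals ("values"), which is harder to arrange if the two are entangled.

```python
# Hypothetical illustration: one transition model shared across many goals.
from typing import Callable, Dict, Tuple

State = str
Action = str

# "Beliefs": a model of how actions change the world, independent of any goal.
transition: Dict[Tuple[State, Action], State] = {
    ("home", "walk"): "park",
    ("home", "drive"): "store",
    ("park", "walk"): "home",
    ("store", "drive"): "home",
}

def plan(start: State, utility: Callable[[State], float]) -> Action:
    """Greedy one-step planner: pick the action whose predicted outcome
    the supplied utility function rates highest. The model is reused as-is."""
    options = {a: s2 for (s1, a), s2 in transition.items() if s1 == start}
    return max(options, key=lambda a: utility(options[a]))

# "Values": two different goals plugged into the same model.
likes_nature = lambda s: {"park": 1.0, "store": 0.0, "home": 0.2}[s]
likes_shopping = lambda s: {"park": 0.0, "store": 1.0, "home": 0.2}[s]

print(plan("home", likes_nature))    # -> "walk"
print(plan("home", likes_shopping))  # -> "drive"
```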
Some of this is going over my head, but...
I think you need to specify if you're talking about terminal values or instrumental values.
There's obviously a big difference between beliefs and terminal values. Beliefs are inputs to our decision-making processes, and terminal values may be as well. However, while beliefs are outputs of the belief-making processes whose inputs are our perceptions, terminal values aren't the output of any cognitive process, or they wouldn't be terminal.
As for instrumental values, well, yes, they are beliefs about the best values ...
I think I tried to solve a similar problem before: that of looking at the simplest possible stable control system and seeing how I can extract the system's "beliefs" and "values" that result in it remaining stable. Then, see if I can find a continuous change between the structure of that system, and a more complex system, like a human.
For example, consider the simple spring-mass-damper system. If you move it from its equilibrium position xe, it will return. What do the concepts of "belief" and "value" map onto here...
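For concreteness, here is a minimal simulation of the system the comment describes (parameters are made up): displaced from xe, the state is driven back to it by the spring force and the damping.

```python
# Minimal spring-mass-damper simulation (hypothetical parameters):
#   m * x'' + c * x' + k * (x - xe) = 0
# Displace the mass from equilibrium xe and watch it return.

m, c, k = 1.0, 0.8, 4.0   # mass, damping coefficient, spring constant
xe = 0.0                  # equilibrium position
x, v = 1.0, 0.0           # initial displacement and velocity
dt = 0.01                 # time step

for step in range(2000):                 # simulate 20 seconds
    a = (-c * v - k * (x - xe)) / m      # acceleration from the force law
    v += a * dt                          # semi-implicit Euler integration
    x += v * dt

print(round(x, 4))  # close to xe: the system has returned to equilibrium
```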
Is it possible that the dichotomy between beliefs and values is just an accidental byproduct of our evolution, perhaps a consequence of the specific environment that we’re adapted to, instead of a common feature of all rational minds?
In the normal usage, "mind" implies the existence of a distinction between beliefs and values. In the LW/OB usage, it implies that the mind is connected to some actuators and sensors which connect to an environment and is actually doing some optimization toward those values. Certainly "rational mind" ent...
I'd like to suggest that the fact that human preferences can be decomposed into beliefs and values is one that deserves greater scrutiny and explanation. It seems intuitively obvious to us that rational preferences must decompose like that (even if not exactly into a probability distribution and a utility function), but it’s less obvious why.
The importance of this question comes from our tendency to see beliefs as being more objective than values. We think that beliefs, but not values, can be right or wrong, or at least that the notion of right and wrong applies to a greater degree to beliefs than to values. One dramatic illustration of this is in Eliezer Yudkowsky’s proposal of Coherent Extrapolated Volition, where an AI extrapolates the preferences of an ideal humanity, in part by replacing their “wrong” beliefs with “right” ones. On the other hand, the AI treats their values with much more respect.
Since beliefs and values seem to correspond roughly to the probability distribution and the utility function in expected utility theory, and expected utility theory is convenient to work with due to its mathematical simplicity and the fact that it’s been the subject of extensive studies, it seems useful as a first step to transform the question into “why can human decision making be approximated as expected utility maximization?”
I can see at least two parts to this question: why this mathematical structure, and why this representation?
Not knowing how to answer these questions yet, I’ll just write a bit more about why I find them puzzling.
Why this mathematical structure?
It’s well known that expected utility maximization can be derived from a number of different sets of assumptions (the so-called axioms of rationality), but they all include the assumption of Independence in some form. Informally, Independence says that what you prefer to happen in one possible world doesn’t depend on what you think happens in other possible worlds. In other words, if you prefer A&C to B&C, then you must prefer A&D to B&D, where A and B are what happens in one possible world, and C and D are what happens in another.
This assumption is central to establishing the mathematical structure of expected utility maximization, where you value each possible world separately using the utility function, then take their weighted average. If your preferences were such that A&C > B&C but A&D < B&D, then you wouldn’t be able to do this.
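To spell out that step, a sketch in the post's notation: suppose the first possible world (where A or B happens) has probability p and the second (where C or D happens) has probability 1 − p. Expected utility evaluates the combined prospects as

\[
EU(A \& C) = p\,u(A) + (1-p)\,u(C), \qquad EU(B \& C) = p\,u(B) + (1-p)\,u(C),
\]

so the common term $(1-p)\,u(C)$ cancels from the comparison: $A \& C \succ B \& C$ holds exactly when $u(A) > u(B)$, and the same inequality decides $A \& D$ versus $B \& D$. A preference with $A \& C \succ B \& C$ but $A \& D \prec B \& D$ therefore has no such additive representation.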
It seems clear that our preferences do satisfy Independence, at least approximately. But why? (In this post I exclude indexical uncertainty from the discussion, because in that case I think Independence definitely doesn't apply.) One argument that Eliezer has made (in a somewhat different context) is that if our preferences didn’t satisfy Independence, then we would become money pumps. But that argument seems to assume agents who violate Independence, but try to use expected utility maximization anyway, in which case it wouldn’t be surprising that they behave inconsistently. In general, I think being a money pump requires having circular (i.e., intransitive) preferences, and it's quite possible to have transitive preferences that don't satisfy Independence (which is why Transitivity and Independence are listed as separate axioms in the axioms of rationality).
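A concrete illustration of that last point is the standard Allais pattern (not from the post): let 1A be $1M for certain, and 1B be a 10% chance of $5M, an 89% chance of $1M, and a 1% chance of nothing; let 2A be an 11% chance of $1M (otherwise nothing), and 2B a 10% chance of $5M (otherwise nothing). Many people report preferring 1A to 1B and 2B to 2A. These two rankings form no cycle, so they are transitive, yet no expected-utility representation fits them, since under expected utility

\[
1A \succ 1B \;\iff\; 0.11\,u(1\mathrm{M}) > 0.10\,u(5\mathrm{M}) + 0.01\,u(0) \;\iff\; 2A \succ 2B .
\]

The pattern violates Independence (the two pairs differ only in a common 89% chance of $1M versus nothing), but by itself it exposes no one to a money pump.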
Why this representation?
Vladimir Nesov has pointed out that if a set of preferences can be represented by a probability function and a utility function, then it can also be represented by two probability functions. And furthermore we can “mix” these two probability functions together so that it’s no longer clear which one can be considered “beliefs” and which one “values”. So why do we have the particular representation of preferences that we do?
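One way to make the construction concrete (a sketch assuming finitely many worlds and a utility function rescaled to be strictly positive): given beliefs $P$ and utilities $U$, define a second probability function

\[
Q(w) \;=\; \frac{P(w)\,U(w)}{\sum_{w'} P(w')\,U(w')} ,
\]

so that $U(w) \propto Q(w)/P(w)$ and the pair $(P, Q)$ encodes the same preferences as $(P, U)$. Replacing the pair by, say, $\left(\tfrac{1}{2}(P+Q),\, Q\right)$ loses no information either, since $P$ is recoverable as $2 \cdot \tfrac{1}{2}(P+Q) - Q$; the preferences themselves do not single out which of the two distributions deserves to be called the "beliefs".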
Is it possible that the dichotomy between beliefs and values is just an accidental byproduct of our evolution, perhaps a consequence of the specific environment that we’re adapted to, instead of a common feature of all rational minds? Unlike the case with anticipation, I don’t claim that this is true or even likely here, but it seems to me that we don’t understand things well enough yet to say that it’s definitely false and why that's so.