
Bayesian Utility: Representing Preference by Probability Measures

Post author: Vladimir_Nesov 27 July 2009 02:28PM 10 points

This is a simple transformation of the standard expected utility formula that I found conceptually interesting.

For simplicity, let's consider a finite discrete probability space with a non-zero probability p(x) at each point, and a utility function u(x) defined on its sample space. The expected utility of an event A (a set of points of the sample space) is the probability-weighted average of the utility function over the event, written as

EU(A)=\frac{\sum_{x\in A}{p(x)\cdot u(x)}}{\sum_{x\in A}{p(x)}}

Expected utility is a way of comparing events (sets of possible outcomes) that correspond to, for example, available actions. Event A is said to be preferable to event B when EU(A)>EU(B). The preference relation doesn't change when the utility function undergoes a positive affine transformation. Since the sample space is assumed finite, we can assume without loss of generality that u(x)>0 for all x. Such a utility function can additionally be rescaled so that, over the whole sample space,

\sum_{x}{p(x)\cdot u(x)}=1

Now, if we define

q(x)=p(x)\cdot u(x)

the expected utility can be rewritten as

EU(A)=\frac{\sum_{x\in A}{q(x)}}{\sum_{x\in A}{p(x)}}

or

EU(A)=\frac{Q(A)}{P(A)}

Here, P and Q are two probability measures. It's easy to see that this form of the expected utility formula has the same expressive power, so a preference relation can be defined directly by a pair of probability measures on the same sample space, instead of by a probability measure and a utility function.

Written in this form, expected utility uses only the measures of the whole event, without looking at the individual points. I tentatively call the measure Q "shouldness", with P remaining "probability". The conceptual advantage of this form is that probability and utility are now on equal footing, and it's possible to work with both of them using familiar Bayesian updating, in exactly the same way. To compute the expected utility of an event given additional information, just use the posterior shouldness and posterior probability:

EU(A|B)=\frac{Q(A|B)}{P(A|B)}
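
Here is a minimal numeric sketch in Python of the machinery so far, with made-up values for p and u (hypothetical, chosen only so that the rescaling condition holds). It checks that the two forms of the expected utility formula agree exactly, and that the posterior ratio Q(A|B)/P(A|B) equals EU(A∩B) up to the factor 1/EU(B), which doesn't depend on A and so leaves the preference ordering over events intact:

    from fractions import Fraction as F

    # Hypothetical probabilities and utilities; u is rescaled so sum p*u = 1.
    p = {"x1": F(1, 4), "x2": F(1, 4), "x3": F(1, 2)}
    u = {"x1": F(2, 5), "x2": F(6, 5), "x3": F(6, 5)}
    assert sum(p[x] * u[x] for x in p) == 1

    q = {x: p[x] * u[x] for x in p}  # shouldness: q(x) = p(x) * u(x)

    def P(event): return sum(p[x] for x in event)
    def Q(event): return sum(q[x] for x in event)

    def eu_classic(event):  # p-weighted average of u over the event
        return sum(p[x] * u[x] for x in event) / P(event)

    def eu_ratio(event):    # EU(A) = Q(A) / P(A)
        return Q(event) / P(event)

    A, B = {"x1", "x2"}, {"x2", "x3"}
    assert eu_classic(A) == eu_ratio(A)  # the two forms agree exactly

    # Posterior shouldness over posterior probability: Q(A|B) / P(A|B).
    post = (Q(A & B) / Q(B)) / (P(A & B) / P(B))
    # Equals EU(A & B) up to the factor 1/EU(B), which doesn't depend on A,
    # so the preference ordering between events given B is unchanged.
    assert post == eu_classic(A & B) / eu_classic(B)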

If events are drawn as points (vectors) in (P,Q) coordinates, expected utility is monotone in the polar angle of the vectors. Since the coordinates are measures of events, the vector depicting a union of disjoint events is the sum of the vectors depicting those events:

(P(A\cup B),Q(A\cup B)) = (P(A),Q(A))+(P(B),Q(B)),\ A\cap B=\emptyset

This makes it possible to see graphically some of the structure of simple sigma-algebras on the sample space, together with the preference relation defined by a pair of measures; see the sketch below. See also this comment on some examples of applying this geometric representation of preference.
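
A quick check of both geometric claims, continuing the hypothetical numbers from the sketch above (redefined here so the snippet runs on its own):

    import math

    p = {"x1": 0.25, "x2": 0.25, "x3": 0.5}
    q = {"x1": 0.1, "x2": 0.3, "x3": 0.6}   # q(x) = p(x) * u(x) from before

    def P(e): return sum(p[x] for x in e)
    def Q(e): return sum(q[x] for x in e)

    events = [{"x1"}, {"x1", "x2"}, {"x1", "x2", "x3"}, {"x2", "x3"}]
    angle = [math.atan2(Q(e), P(e)) for e in events]  # polar angle of (P, Q)
    eu = [Q(e) / P(e) for e in events]                # expected utility

    # Expected utility is monotone in the polar angle: both keys give one order.
    assert sorted(range(4), key=lambda i: angle[i]) == \
           sorted(range(4), key=lambda i: eu[i])

    # Disjoint events add as vectors in (P, Q) coordinates.
    A, B = {"x1"}, {"x3"}
    assert (P(A | B), Q(A | B)) == (P(A) + P(B), Q(A) + Q(B))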

A preference relation defined by expected utility in this way also doesn't depend on constant factors in the measures, so it's unnecessary to require that the measures sum to 1.

Since P and Q are just devices for representing the preference relation, there is nothing inherently "epistemic" about P. Indeed, it's possible to mix P and Q together without changing the preference relation. A pair (p',q') defined by

\left\{\begin{matrix} p' &=& \alpha\cdot p + (1-\beta)\cdot q\\ q' &=& \beta\cdot q + (1-\alpha)\cdot p \end{matrix}\right.\qquad \alpha+\beta>1

gives the same preference relation (the condition α+β>1 says that the mixing matrix has positive determinant, which is what preserves the ordering):

\frac{Q(A)}{P(A)}>\frac{Q(B)}{P(B)} \Leftrightarrow \frac{Q'(A)}{P'(A)}>\frac{Q'(B)}{P'(B)}

(Coefficients can be negative or more than 1, but the values of p' and q' must remain positive.)
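
A numerical spot-check of this claim in Python, on a few hypothetical (P(A), Q(A)) vectors; with α+β<1 (negative determinant), e.g. alpha, beta = F(3, 5), F(1, 5), the assertion below would fail, since the ordering reverses:

    from fractions import Fraction as F
    from itertools import combinations

    alpha, beta = F(4, 5), F(2, 5)   # alpha + beta > 1: positive determinant

    def mix(p, q):
        return alpha * p + (1 - beta) * q, beta * q + (1 - alpha) * p

    # Hypothetical (P(A), Q(A)) vectors for three events.
    vecs = [(F(1, 2), F(1, 4)), (F(1, 4), F(1, 2)), (F(3, 4), F(3, 4))]
    for (p1, q1), (p2, q2) in combinations(vecs, 2):
        (P1, Q1), (P2, Q2) = mix(p1, q1), mix(p2, q2)
        # Preference by Q/P is unchanged after mixing.
        assert (q1 / p1 > q2 / p2) == (Q1 / P1 > Q2 / P2)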

Conversely, given a fixed measure P, it isn't possible to define an arbitrary preference relation by varying only Q (or the utility function). For example, for a sample space of three elements a, b and c, if p(a)=p(b)=p(c), then EU(a)>EU(b)>EU(c) implies EU(a+c)>EU(b+c), so it isn't possible to choose q such that EU(a+c)<EU(b+c). If we are free to choose p, however, an example with these properties (allowing zero values for simplicity) is a=(0,1/4), b=(1/2,3/4), c=(1/2,0) in (P,Q) coordinates, with a+c=(1/2,1/4) and b+c=(1,3/4), so that EU(a+c)=1/2<3/4=EU(b+c).
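
A direct check of this example's arithmetic (a minimal sketch; the (P,Q) pairs are the ones given above):

    # (P, Q) values for the three sample points, as given in the example.
    P_ = {"a": 0.0, "b": 0.5, "c": 0.5}
    Q_ = {"a": 0.25, "b": 0.75, "c": 0.0}

    def EU(*xs):
        ps = sum(P_[x] for x in xs)
        qs = sum(Q_[x] for x in xs)
        return float("inf") if ps == 0 else qs / ps

    assert EU("a") > EU("b") > EU("c")   # pointwise: a preferred to b to c
    assert EU("a", "c") < EU("b", "c")   # yet the union a+c loses to b+c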

The prior is an integral part of preference, and it works exactly the same way as shouldness. Manipulations with probabilities, or Bayesian "levels of certainty", are manipulations with "half of preference". The problem of choosing Bayesian priors is, in general, the problem of formalizing preference: it can't be solved completely without considering utility, without formalizing values, and values are very complicated. No simple morality, no simple probability.

Comments (35)

Comment author: cousin_it 27 July 2009 02:53:35PM 1 point

Clever! I would have titled it "Couldness and Shouldness", and inserted some sort of pun about "wouldness" at the end.

I don't quite understand the part about mixing. Did you mean 1 >= alpha > beta >= 0? If not, some vectors now have negative coordinates and the polar angle becomes an ambiguous ordering. If so, that's not the general form: why not use any matrix with nonnegative elements and positive determinant?

And I don't understand the last paragraph at all. If X coordinates of points are given, changing the Y coordinates can reorder the polar angles arbitrarily. Or did you simply mean that composite events stay dependent on simple events?

Sorry if those are stupid questions.

Comment author: Vladimir_Nesov 27 July 2009 03:14:49PM 0 points

Mixing: coefficients can be negative or more than 1, but the values of p' and q' must remain positive (added to the post). This is also a way to drive the polar angle of the vector for the best point of the sample space to pi/2 (look at the bounding parallelogram in (P,Q) coordinates).

You can't move the points around independently, since their coordinates are measures, sums of distributions over specific events, so if you move one event, some of the other events move as well. I'll add an example to the article in a moment.

Comment author: Vladimir_Nesov 27 July 2009 04:22:47PM 0 points

Added an example of when it isn't possible to specify arbitrary preference for a given prior, and a philosophical note at the end (related to the "where do the priors come from" debate).

Comment author: Jonathan_Graehl 27 July 2009 08:50:02PM 0 points

I don't follow the equation of preference and priors in the last paragraph.

Comment author: Vladimir_Nesov 27 July 2009 08:54:45PM 0 points

What do you mean?

Comment author: Jonathan_Graehl 27 July 2009 09:03:34PM 0 points

The prior is an integral part of preference, and it works exactly the same way as shouldness.

Could you demonstrate? I don't understand.

The problem of choosing Bayesian priors is, in general, the problem of formalizing preference: it can't be solved completely without considering utility

I also don't understand what you mean above.

Comment author: Vladimir_Nesov 27 July 2009 09:51:37PM 1 point

What is usually called a "prior" is represented by the measure P in the post. Together with the "shouldness" Q, they constitute the recipe for computing preference over events, through expected utility.

If it's not possible to choose a prior more or less arbitrarily and then fill in the gaps using utility to get the correct preference, then some priors are inherently incorrect for human preference, and finding the priors that admit completion to the correct preference with a fitting utility requires knowledge about preference.

Comment author: Jonathan_Graehl 28 July 2009 05:43:31AM 0 points

I see - by "prior" you mean "current estimate of probability", because P was defined

I've been dealing lately with learning research where "prior" means how likely a given model of probability(outcome) is before any evidence, so maybe I was a little rigid.

In any case, I suggest you consistently use "probability" and drop "prior".

Comment author: Jonathan_Graehl 28 July 2009 05:46:18AM 0 points

Regarding your second point: I'm not sure how it's rational to choose your beliefs because of some subjective preference order.

Perhaps you could suggest a case where it makes sense to reason from preferences to "priors which make my preferences consistent", because I'm also fuzzy on the details of when and how you propose to do so.

Comment author: timtyler 27 July 2009 05:09:01PM -1 points

I've critiqued this "value is complex" [http://lesswrong.com/lw/y3/value_is_fragile/] material before. To summarise from my objections there:

The utility function of Deep Blue had 8,000 parts and contained a lot of information. Throw all that information away, and all you really need to reconstruct Deep Blue is the knowledge that its aim is to win games of chess. The exact details of the information in the original utility function are not recovered, but the eventual functional outcome would be much the same: a powerful chess computer.

The supposed complexity is actually a bunch of implementation details that can be effectively recreated from the goal - if that should prove to be necessary.

It is not precious information that must be preserved. If anything, attempts to preserve the 8,000 parts of Deep Blue's utility function while improving it would actually have a crippling negative effect on its future development. For example, the "look 9 moves ahead" heuristic is a feature when the program is weak, but a serious bug when it grows stronger.

Similarly with complexity of human values: those are a bunch of implementation details to deal with the problem of limited resources - not some kind of representation of the real target.

Comment author: Wei_Dai 27 July 2009 08:07:37PM 1 point

Why was this comment voted down so much (to -4 as of now)? It seems to be a reasonable point, clearly written, not an obvious troll or off-topic. Why does it deserve to be ignored?

Comment author: Jonathan_Graehl 27 July 2009 08:55:19PM 2 points

It looks like this is a response to the passing link to http://wiki.lesswrong.com/wiki/Complexity_of_value in the article. At first I didn't understand what in the article you were responding to.

Comment author: timtyler 27 July 2009 09:08:21PM -2 points

The article it was posted in response to was this one - from the conclusion of the post:

http://wiki.lesswrong.com/wiki/Complexity_of_value

That's a wiki article, which can't be responded to directly. The point I raise is an old controversy. This message seems rather redundant now, since the question it responded to has subsequently been dramatically edited.

Comment author: Jonathan_Graehl 28 July 2009 05:49:46AM 0 points

Yes, I edited, but before your response. Sorry for the confusion.

Comment author: JGWeissman 27 July 2009 06:03:30PM 1 point

Why are we concerned with the expected utility of some subset of the probability space? To find the expected utility of an action, you should sum the products of each point's utility with its conditional probability given that you take that action, over all points in the space. In effect, you are only considering actions that reduce the probability of some points to zero and then renormalize the probabilities of the remaining points.

Comment author: Vladimir_Nesov 27 July 2009 10:21:34PM 1 point

Expected utility is usually written for actions, but it can be written as in the post as well; it's formally equivalent. This treatment of expected utility isn't novel in any way. Any action can be identified with the set of possibilities (outcomes) consistent with taking it. When you talk of actions that "don't reduce some probabilities to zero", you are actually talking about the effect of the actions on probability distributions of random variables, but behind those random variables is still a probability space on which any information is an element of the sigma-algebra, that is, a clear-cut set of possibilities.

Comment author: JGWeissman 27 July 2009 10:38:18PM 1 point

Expected utility is usually written for actions, but it can be written as in the post as well; it's formally equivalent.

How is it formally equivalent? How can I represent the expected utility of an action with arbitrary effects on conditional probability using the average, weighted by unconditional probabilities, of the utility of some subset of the possibilities, as in the post?

Comment author: Vladimir_Nesov 27 July 2009 11:12:00PM 1 point

Let A be the action (the set of possibilities consistent with taking the action), and O the set of possible outcomes (each one rated by the utility function; assume for simplicity that every concrete outcome is considered, not event-outcomes). We can assume that O is the whole sample space. Then:

EU(A)=\sum_{x\in A}{u(x)\cdot p(x|A)}=\sum_{x\in A}{u(x)\cdot\frac{p(x)}{P(A)}}=\frac{Q(A)}{P(A)}

Comment author: JGWeissman 27 July 2009 11:41:56PM 0 points

As I already explained, that only works for actions that exclude some outcomes and renormalize the probabilities of remaining outcomes, preserving the ratios of their probabilities.

Suppose O had 2 elements, x1 and x2, such that p(x1) = p(x2) = .5. If you take action A, then you have conditional probabilities p(x1|A) = .2 and p(x2|A) = .8. In this case, your transformation P(x|A) = P(x, A)/P(A) does not work, because A did not remove x1 as a possibility; it just made it less likely.

Comment author: Vladimir_Nesov 27 July 2009 11:58:10PM 0 points

P(x|A) = P(x,A)/P(A) is just the definition of conditional probability. You are trying to interpret x1 and x2 as events, while in the grandparent comment the x are elements of the sample space. If you want to consider non-concrete outcomes, compose them from smaller elements. For example, you can have P(O1)=P(O2)=.5, P(O1|A)=.2, P(O2|A)=.8 if O1={x1,x2}, O2={x3,x4}, A={x1,x3}, and p(x1)=.1, p(x2)=.4, p(x3)=.4, p(x4)=.1.
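
A quick check of these numbers (a minimal Python sketch, using the values given in the comment above):

    from fractions import Fraction as F

    p = {"x1": F(1, 10), "x2": F(4, 10), "x3": F(4, 10), "x4": F(1, 10)}
    O1, O2, A = {"x1", "x2"}, {"x3", "x4"}, {"x1", "x3"}

    def P(e): return sum(p[x] for x in e)
    def P_given(e, cond): return P(e & cond) / P(cond)  # P(e|cond) = P(e,cond)/P(cond)

    assert P(O1) == P(O2) == F(1, 2)
    assert P_given(O1, A) == F(1, 5) and P_given(O2, A) == F(4, 5)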

Comment author: Peter_de_Blanc 28 July 2009 04:54:37PM 0 points

How do you calculate P(A)?

Comment author: Vladimir_Nesov 28 July 2009 08:52:04PM 0 points

Trick question? P(A) is just the probability of some event, so depending on the problem it could be calculated in any of the usual ways. "A" can, for example, correspond to a value of some random variable in a (dynamic) graphical model, taking observations into account, so that its probability value is obtained from belief propagation.

Comment author: jimmy 27 July 2009 06:50:24PM 1 point

I may be missing your point, but to me, it looks like the summary would be:

If you bundle utility with probability, you can do the same maths, which is nice since it simplifies other things. You cannot prefer certain expected outcomes no matter what your utility function is [neat result, btw].

Since the probability math works, I now call the new thing "probability" and show that you can't find prior "probability" (new definition) without considering the normal definition of probability.

This doesn't change anything about regular probability, or finding priors. It just says that you cannot find out what you instrumentally want a priori without knowing your utility function, which is trivially true.

Comment author: Vladimir_Nesov 27 July 2009 07:04:53PM 1 point

As I said in the first phrase, this is but a "simple transformation of the standard expected utility formula that I found conceptually interesting". I don't quite understand the second part of your comment (starting from "Since the probability...").

Comment author: jimmy 27 July 2009 07:32:20PM 0 points

I agree that it is an interesting transformation, but I think your conclusion ("No simple morality, no simple probability.") does not follow.

Comment author: Vladimir_Nesov 27 July 2009 07:39:35PM 2 points

That argument says that if you pick a prior, you can't "patch" it to become an arbitrary preference by finding a fitting utility function. It's not particularly related to the shouldness/probability representation, and it isn't well-understood, but it's easy to demonstrate by example in this setting, and I think it's an interesting point as well, possibly worth exploring.

Comment author: cousin_it 27 July 2009 09:56:50PM 0 points

The new version of the post still loses me at about the point where mixing comes in. (What's your motivation for introducing mixing at all?) I would've been happier if it went on about geometry instead of those huge inferential leaps at the end.

And JGWeissman is right: expected utility is a property of actions, not outcomes, which seems to make the whole post invalid unless you fix it somehow.

Comment author: Vladimir_Nesov 27 July 2009 10:26:28PM 1 point

Any action can be identified with a set of outcomes consistent with the action. See my reply to JGWeissman.

Is the example after mixing unclear? In what way?

Comment author: cousin_it 27 July 2009 10:33:20PM 2 points

Yes, that's true but makes your conclusion a bit misleading because not all sets of outcomes correspond to possible actions. It can easily happen that any preference ordering on actions is rationalizable by tweaking utility under a given prior.

The math in the example is clear enough, I just don't understand the motivation for it. If you reduce everything to a preference relation on subsets of a sigma algebra, it's trivially true that you can tweak it with any monotonic function, not just mixing p and q with alpha and beta. So what.

Comment author: Vladimir_Nesov 27 July 2009 10:47:54PM 0 points

It can also happen that the prior happens to be the right one, but it isn't guaranteed. This is a red flag, a possible flaw, something to investigate.

The question of which events are "possible actions" is a many-faceted one, and solving this problem "by definition" doesn't work. For example, if you can pick the best strategy, it doesn't matter what the preference order says for all events except the best strategy, even what it says for "possible actions" which won't actually happen.

Strictly speaking, I don't even trust (any) expected utility (and so Bayesian math) to represent preference. Any solution has to also work in a discrete deterministic setting.

Comment author: cousin_it 28 July 2009 07:45:26AM 1 point

It seems to me that you're changing the subject, or maybe making inferential jumps that are too long for me.

The information to determine which events are possible actions is absent from your model. You can't calculate it within your setting, only postulate it.

If the overarching goal of this post was finding ways to represent human preference (did you imply that? I can't tell), then I don't understand how it brings us closer to that goal.

Comment author: Vladimir_Nesov 28 July 2009 11:38:18AM 2 points

The Hofstadter's Law of Inferential Distance: What you are saying is always harder to understand than you expect, even when you take into account Hofstadter's Law of Inferential Distance.

Of course this post is only a small side-note, and it tells nothing about which events mean what. Human preference is a preference, so even without details, the discussion of preference-in-general has some implications for human preference, which the last paragraph of the post alluded to with regard to picking priors for Bayesian math.

Comment author: JGWeissman 27 July 2009 10:42:31PM 0 points

Expected utility is usually written for actions, but it can be written as in the post as well; it's formally equivalent.

However, the ratios of the conditional probabilities of those outcomes, given that you take a certain action, will not always equal the ratios of the unconditional probabilities, as in your formula.

Comment author: Vladimir_Nesov 13 August 2009 08:41:07PM 0 points

A couple of random thoughts. Viewing prior+utility as vectors in probability-shouldness coordinates, it's easy to see that the ability to rescale and shift utilities without changing preference corresponds to transformations of the shouldness component. These transformations don't change the order of the vectors' (events') angles, and so even if we allow shouldness to go negative, expected utility will still work as a preference. Similarly, if the shouldness is fixed positive, one could allow rescaling and shifting of probability, so that it, too, can go negative.

Another transformation: if we swap the roles of probability and shouldness, the resulting prior+utility will have the shouldness of the original system as its prior, and the inverse utility of the original system as its utility. In this system, expected utility minimization describes the same optimization as expected utility maximization in the original system. The same effect could be achieved by flipping the sign of the utility (another symmetry), which can also be easily seen in the probability-shouldness diagram.

Applying both transformations, we get the same preference, but with the shouldness of the original system as the prior. The utility of the transformed system is the negated inverse of the utility in the original representation. This shows that, conceptually, the probability distribution and the shouldness distribution are interchangeable.
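
A small numeric illustration of the swap, on hypothetical (P, Q) values:

    # (P(A), Q(A)) for three hypothetical events.
    vecs = {"A": (0.5, 0.25), "B": (0.25, 0.5), "C": (0.25, 0.25)}

    eu = {e: q / p for e, (p, q) in vecs.items()}          # original: maximize Q/P
    eu_swapped = {e: p / q for e, (p, q) in vecs.items()}  # swapped: minimize P/Q

    # The best event under maximization is best under swapped minimization.
    assert max(eu, key=eu.get) == min(eu_swapped, key=eu_swapped.get)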

Comment author: othercriteria 14 January 2015 11:42:22PM 0 points

This seems cool, but I have a nagging suspicion that this reduces, with greater generality, to a handful of sentences if you use conditional expectation of the utility function and the Radon-Nikodym theorem?