Previously: "objective probabilities", but more importantly knowing what you want

Slight change of plans: the only reason I brought up "objective" probabilities as early as I did was to help establish the idea of utilities. But given all the holes that would need patching to get from one to the other (continuity, etc.), I've decided to take a different route and define utilities more directly. So, for now, forget about "objective probabilities" and frequencies. I'll get back to them later, but for now I'm leaving them aside.

So, we've got preference rankings, but not much of a sense of scale yet. We don't have much of a way of asking "how _much_ do you prefer this to that?" That's what I'm going to deal with in this post. There will be some slightly roundabout abstract bits here, but they'll let me establish utilities. And once I have that, more or less all I have to do is use utilities as a currency to apply Dutch book arguments to. (That will, more or less, be the shape of the rest of the sequence.) The basic idea here is to work out a way of comparing the magnitudes of differences of preferences, i.e., how much you would have preferred some A2 to A1 vs how much you would have preferred some B2 to B1. But it seems difficult to define, no? "How much would you have wanted to switch reality to A2, if it was in state A1, vs how much would you have wanted to switch reality to B2, given that it was in B1?"

So far, the best solution I can think of is to ask: if you are equally uncertain about whether A1 or B1 is true, would you prefer to replace A1 (if it would have been true) with A2, or to replace B1 with B2 in the same sense? Specifically, suppose you're in a state of complete uncertainty with regard to two possible states/outcomes, A1 or B1, such that you'd be equally surprised by either. Then consider that, instead of keeping that particular set of two possibilities, you have to choose between two substitutions: you can either conditionally replace A1 with A2 (that is, if A1 would have been the outcome, you get A2 instead), _or_ you can replace B1 with B2 in the same sense. So you have to choose between (A2 or B1) and (A1 or B2), where, again, your state of uncertainty is such that you'd be equally surprised by either outcome. (That is, you can imagine that whatever it is that's giving rise to your uncertainty is effectively controlling both possibilities; you simply get to decide which outcomes are wired to the source of uncertainty.) If you choose the first, we will say that the difference in your preference between A2 and A1 is bigger than that between B2 and B1, and vice versa. And if you're indifferent, we'll say the preference difference of A2 vs A1 equals the preference difference of B2 vs B1.
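To make that concrete, here's a minimal sketch in Python. It assumes, purely for illustration, an agent that happens to value outcomes numerically (the valuation `u` below is made up; we haven't established any such numbers yet) and that an equal-surprise pair is valued by averaging. For such an agent, the substitution choice above reduces to exactly a comparison of differences:

```python
# Illustrative only: 'u' is a made-up numeric valuation, and we assume an
# equal-surprise pair of outcomes is valued by averaging. Neither assumption
# has been established at this point in the sequence.
u = {"A1": 1.0, "A2": 5.0, "B1": 2.0, "B2": 4.0}

def prefers_first_substitution(u, A1, A2, B1, B2):
    """Compare the two substitutions: (A2 or B1) vs (A1 or B2)."""
    keep_B1 = (u[A2] + u[B1]) / 2  # chose to replace A1 with A2
    keep_A1 = (u[A1] + u[B2]) / 2  # chose to replace B1 with B2
    return keep_B1 > keep_A1

# Choosing the first substitution is equivalent to saying the difference
# u(A2) - u(A1) exceeds the difference u(B2) - u(B1):
assert prefers_first_substitution(u, "A1", "A2", "B1", "B2") == (
    (u["A2"] - u["A1"]) > (u["B2"] - u["B1"])
)
```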

But wait! You might be saying, "Oh sure, that's all nice, but why the fluff should we consider this to obey any form of transitivity? Why should we consider this sort of comparison to actually correspond to a real ordered ranking of these things?" I'm glad you asked, because I'm about to tell you! Isn't that convenient? ;)

First, I'm going to introduce a slightly unusual notation which I don't expect to ever need to use again. I need it now, however, because I haven't established epistemic probabilities yet, and I need to be able to talk about "equivalent uncertainties" without assuming "uncertainty = probability" (which is basically what I'll be establishing over the next several posts).

A v B v C v D ... will be defined to mean that you're in a state of uncertainty such that you'd be equally surprised by any of those outcomes. (Obviously, this is commutative. A v B v C is the same state as C v A v B, for instance.)
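(In code terms, if it helps: you can picture such a state as an unordered collection of outcomes, which makes the commutativity automatic. A throwaway illustration, nothing more:)

```python
# Throwaway illustration: "A v B v C" as an unordered set of outcomes
# you'd be equally surprised by. Unordered, so commutativity comes free.
state1 = frozenset({"A", "B", "C"})
state2 = frozenset({"C", "A", "B"})
assert state1 == state2  # A v B v C is the same state as C v A v B
```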

Next, I need to establish the following principle:

If you prefer Ai v Bi v Ci v... to Aj v Bj v Cj v..., then you should prefer Ai v Bi v Ci v... v Z to Aj v Bj v Cj v... v Z.

If this seems familiar, it should. However, this is a bit more abstract, since we don't yet have a way to measure uncertainty. I'm just assuming here that one can meaningfully say things like "I'd be equally surprised either way." We'll later revisit this argument once we start to apply a numerical measure to our uncertainty.

To deal with a couple of possible ambiguities: first, imagine you use the same source of uncertainty no matter which option you choose, so the only thing you get to choose is which outcomes are plugged into the "consequence slots." Then imagine that you swap the source of uncertainty for an equivalent one. Unless you place some inherent value in whatever it is that's leading to your state of uncertainty, or you have additional information (in which case it's not an equivalent amount of uncertainty, so this doesn't even apply), you should value it the same either way, right? Basically, an "it's the same, unless it's different" principle. :) But for now, if it helps, imagine it's the same source of uncertainty, just with different consequences plugged in.

Suppose you prefer Aj v Bj ... v Z to Ai v Bi ... v Z. You have an equal amount of expectation (in the informal sense) of Z in either case, by assumption, so it makes no difference which of the two you select as far as Z is concerned. And if Z doesn't happen, you're left with the rest (assuming appropriate mutual exclusivity, etc.). So that leaves you back at the "i"s vs the "j"s, but by assumption you already preferred, overall, the set of "i"s to the set of "j"s. So preferring the second set with Z tacked on means that either Z is the outcome, which could have happened just the same either way, or, if not Z, then what you have left is equivalent to the Aj v Bj v ... option, which, by assumption, you prefer less than the "i"s. So, effectively, you either gain nothing or end up with a set of possibilities that you prefer less overall. By the power vested in me by Don't Be Stupid, I say that therefore if you prefer Ai v Bi v ... to Aj v Bj v ..., then you must have the same preference ordering when the Z is tacked on.
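Here's a tiny numeric sanity check of that principle, under the same illustrative assumptions as before (made-up values, and equal-surprise sets valued by averaging):

```python
# Illustrative only: made-up values, and equal-surprise sets valued by
# averaging their outcomes. Nothing here is established yet.
u = {"Ai": 3.0, "Bi": 6.0, "Aj": 2.0, "Bj": 5.0, "Z": 10.0}

def value(outcomes):
    return sum(u[o] for o in outcomes) / len(outcomes)

# The "i" set is preferred to the "j" set...
assert value({"Ai", "Bi"}) > value({"Aj", "Bj"})
# ...and tacking Z onto both sides preserves that ordering.
assert value({"Ai", "Bi", "Z"}) > value({"Aj", "Bj", "Z"})
```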

There is, however, a possibility that we haven't quite eliminated via the above construction: being indifferent to Ai v Bi v ... vs Aj v Bj v ... while actually preferring one of the versions with the Z tacked on. All I can say to that is: unless your preference structure explicitly has some term for certain types of sources of uncertainty, set up in certain ways, leading to certain preferences, I don't see any reasonable way that should be happening. I.e., where would the latter preference be arising from, if it's not arising from preferences relating to the individual possibilities?

I admit, this is a weak point. In fact, it may be the weakest part, so if anyone has any concrete objections to this bit, I'd be interested in hearing them. But the "reasonableness" criterion seems reasonable here. So, for now at least, I'm going to treat it as sufficiently established to move on.

So let's get to building up utilities. Suppose the preference difference of A2 vs A1 is larger than the preference difference of B2 vs B1, which in turn is larger than the preference difference of C2 vs C1.

Is the preference difference of A2 vs A1 larger than that of C2 vs C1, according to the above way of comparing the magnitudes of preference differences?

Let's find out (where >, <, and = represent preference relations):

We have

A2 v B1 > A1 v B2

We also have

B2 v C1 > B1 v C2

Let's now use our above theorem of being able to tack on a "Z" without changing preference ordering.

The first one we will transform into (by tacking an extra C1 onto both sides):

A2 v B1 v C1 > A1 v B2 v C1

The second comparison will be transformed into (by tacking an extra A1 onto both sides):

A1 v B2 v C1 > A1 v B1 v C2

Aha! Now we've got an expression that shows up in both comparisons: specifically, A1 v B2 v C1.

By earlier posts, we've already established that preference rankings are transitive, so we can chain these two comparisons to derive:

A2 v B1 v C1 > A1 v B1 v C2

And, again, by the above rule, we can chop off a term that shows up on both sides, specifically B1:

A2 v C1 > A1 v C2

Which is our definition for saying the preference difference between A2 and A1 is larger than that between C2 and C1. (Given equal expectation, in the informal sense, of A1 or C1, you'd prefer to replace the possibility A1 with A2 than to replace the possibility C1 with C2.) A similar argument applies for equality. So there, we've got transitivity for our comparisons of differences of preferences. Woooo!
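For the suspicious, here's a small brute-force check of that chain, once more under the illustrative assumptions that outcomes have some made-up numeric values and that equal-surprise sets are valued by averaging:

```python
import random

def avg(u, *outcomes):
    # Value of an equal-surprise set, under the illustrative averaging
    # assumption (not something the sequence has established).
    return sum(u[o] for o in outcomes) / len(outcomes)

random.seed(0)
for _ in range(1000):
    # Random made-up valuations for the six outcomes.
    u = {k: random.uniform(-10, 10)
         for k in ("A1", "A2", "B1", "B2", "C1", "C2")}
    # Premises, stated as the gamble comparisons above:
    if (avg(u, "A2", "B1") > avg(u, "A1", "B2")
            and avg(u, "B2", "C1") > avg(u, "B1", "C2")):
        # Conclusion: A2 v C1 > A1 v C2, i.e. the A-difference
        # exceeds the C-difference.
        assert avg(u, "A2", "C1") > avg(u, "A1", "C2")
```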

Well then, let us call W a utility function if it has the property that whichever of >, =, or < holds between W(B2) - W(B1) and W(A2) - W(A1), the corresponding relation holds between the preference differences. For example, if we have this:

W(B2) - W(B1) > W(A2) - W(A1), then we have this:

B2 v A1 > B1 v A2.

(and similar for equality.)

In other words, differences of utility act as an internal currency. Gaining X points of utility corresponds to a climb up your preference ranking that's worth the same no matter what the starting point is. This gives us something to work with.

Also, note that the relations will hold under shifting everything by an equal amount and under multiplying everything by some positive number. So, basically, you can apply a (positive) affine transform to the whole thing and still retain all the important properties, since all we care about are the relationships between differences, rather than the absolute values.
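A quick check of that invariance, with made-up numbers (the particular `W`, `a`, and `b` below are arbitrary illustrations):

```python
# Made-up utilities, purely for illustration.
W = {"A1": 1.0, "A2": 5.0, "B1": 2.0, "B2": 4.0}

def diff_relation(W, X2, X1, Y2, Y1):
    # Compare W(X2)-W(X1) against W(Y2)-W(Y1); returns -1, 0, or 1.
    d = (W[X2] - W[X1]) - (W[Y2] - W[Y1])
    return (d > 0) - (d < 0)

# A positive affine transform: W' = a*W + b, with a > 0.
a, b = 3.0, -7.0
W_prime = {k: a * v + b for k, v in W.items()}

# All difference comparisons come out the same under W and W'.
assert diff_relation(W, "A2", "A1", "B2", "B1") == (
    diff_relation(W_prime, "A2", "A1", "B2", "B1")
)
```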

And there it is; that's utilities: an indexing of preference rankings with the special property that differences between those indices actually correspond in a meaningful way to _how much_ you prefer one thing to another.

Comments (17)

I'm a little puzzled by what it is you're trying to do here. It feels as though you're reinventing the wheel, but I have no clear sense of whether that's what you see yourself as doing, and if not, why you think your wheel is different from existing ones.

(This may just be a communication issue, but it might be worth clarifying.)

Basically, I'm trying to derive decision theory as the approximately unique solution to "don't automatically lose".

It occurred to me that someone should be doing something like this on OB or LW, so... I'm making a go of it.

I'm afraid that's still a little too vague for me to make much sense of. What decision theory are you trying to derive? How does this particular decision theory differ (if at all) from other decision theories out there? If you're deriving it from different premises/axioms than other people already have, how do these relate to existing axiomatizations?

Perhaps most importantly, why does this need to be done from scratch on OB/LW? I could understand the value of summarizing existing work in a concise and intuitive fashion, but that doesn't seem to be what you're doing.

Seems reasonable to me that it would be useful to have somewhere on LW a derivation of decision theory, an answer to "why this math rather than some other?"

I wanted to base it on Dutch book/vulnerability arguments, but then I kept finding things I wanted to generalize and so on. So I decided to do a derivation in that spirit, but with all the things filled in that I felt I had needed to fill in for myself. It's more "here's what I needed to think through to really satisfy myself with this." But yeah, I'm just going for ordinary Bayesian decision theory and epistemic probabilities. That's all. I'm not trying to do anything really novel here.

I'm not so much axiomatizing as much as working from the "don't automatically lose" rule.

I wanted to base it on Dutch book/vulnerability arguments, but then I kept finding things I wanted to generalize and so on. So I decided to do a derivation in that spirit, but with all the things filled in that I felt I had needed to fill in for myself. It's more "here's what I needed to think through to really satisfy myself with this."

Just a thought, but I wonder whether it might work better to:

  1. start with the Dutch book arguments;
  2. explicitly state the preconditions necessary for them to work;
  3. gradually build backwards, filling in the requirements for the most problematic preconditions first.

This would still need to be done well, but it has the advantage that it's much clearer where you're going with everything, and what exactly it would be that you're trying to show at each stage.

At the moment, for example, I'm having difficulty evaluating your claim to have shown that utility "indices actually correspond in a meaningful way to how much you prefer one thing to another". One reason for that is that the claim is ambiguous. There's an interpretation on which it might be true, and at least one interpretation on which I think it's likely to be false. There may be other interpretations that I'm entirely missing. If I knew what you were going to try to do with it next, it would be much easier to see what version you need.

Taking this approach would also mean that you can focus on the highest value material first, without getting bogged down in potentially less relevant details.

Seems reasonable to me that it would be useful to have somewhere on LW a derivation of decision theory, an answer to "why this math rather than some other?"

That's only if the derivation is good. I warned you that you are going to shoot your feet off if you are not really prepared. Even the classical axiomatizations have some problems with convincing people to trust in them.

It's somewhat less constructive than reinventing the wheel, actually. It's axiomatic, not empirical.

If A2 - A1 > B2 - B1, and A1 = B1, then A2 + B1 > A1 + B2 is about as insightful as 4+2 > 2+3, or 2n+2 > 2n+1. Once you set your definitions, the meaning of ">" does pretty much all your work for you.

As I understand it, your goal is to derive some way of assigning utilities to different outcomes such that they maintain preference ranking. I believe this could be done about as well with:

"Assume all outcomes can be mapped to discrete util values, and all utils are measured on the same scale."

This gives you all of the properties you've described and argued for in the last several posts, I believe, and it takes rather less effort. I believe you've assumed this implicitly, though you haven't actually admitted as much. Your system follows from that statement if it is true; if it's false, your system cannot stand. It's also rather easier to understand than these long chains of reasoning to cross very small gaps.

It's somewhat less constructive than reinventing the wheel, actually. It's axiomatic, not empirical.

The whole von Neumann-Morgenstern edifice (which is roughly what Psy-Kosh seems to be reconstructing in a roundabout way) is axiomatic. That doesn't make it worthless.

assigning utilities to different outcomes such that they maintain preference ranking ...could be done about as well with [the assumption that] all outcomes can be mapped to discrete util values.

Well, yes. You can derive X from the assumption that X is true, but that doesn't seem very productive. (I didn't think Psy-Kosh claimed, or needs to claim (yet), that all utils are measured on the same scale, but I could be wrong. Not least because that statement could mean a variety of different things, and I'm not sure which one you intend.)

Only some preference orderings can be represented by a real-valued utility function. Lexicographic preferences, for example, cannot. Nor can preferences which are, in a particular sense, inconsistent (e.g. cyclic preferences).

My sense is that Psy-Kosh is trying to establish something like a cardinally measurable utility function, on the basis of preferences over gambles. This is basically what vNM did, but (a) as far as I can tell, they imposed more structure on the preferences; (b) they didn't manage to do it without using probabilities; and (c) there's a debate about the precise nature of the "cardinality" they established. The standard position, as I understand it, is that they actually didn't establish cardinality, just something that looks kind of like it.

Intuitively, the problem with the claim that utility "indices actually correspond[] in a meaningful way to how much you prefer one thing to another" is that you could be risk-averse, or risk-loving with respect to welfare, and that would break the correspondence. (Put another way: the indices correspond to how much you prefer one thing to another adjusted for risk - not how much you prefer one thing to another simpliciter.)

Yeah. Here I'm trying to actually justify the existence of a numbering scheme with the property that "an increase of five points of utility is an increase of five points of utility (in some set of utility units)", no matter what state you're starting and increasing from.

I need to do this so that I then have a currency I can use in a Dutch book style argument to build up the rest of it.

As for lexicographic preferences, I had to look those up. Thanks, that's interesting. Maybe doable with hyperreals or such?

As for risk aversion, um... unless I misunderstand your meaning, that should be easily doable. Simply have increasingly huge steps of disutility as one goes down the preference chain, so that even a slight possibility of a low-ranked outcome would be extremely unpreferred?

I'm afraid all of this is all still a bit vague for me, sorry.

Are you familiar with the standard preference representation results in economics (e.g. the sort you'd find in a decent graduate level textbook)? The reason I ask is that the inability to represent lexicographic preferences is pretty well-known, and the fact that you weren't aware of it makes me suspect even more strongly than before that you may be trying to do something that's already been done to death without realizing it.

I think we're talking past each other on the risk aversion front. Probably my fault, as my comment was somewhat vague. (Maybe also an issue of inferential distance.)

The more I think about it, though, it seems like hyperreals, now that I know of them, would let one do a utility function for lexicographic preferences, no?
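Something like this sketch, maybe (hypothetical two-attribute outcomes, with tuple comparison standing in for the "first coordinate is infinitely more important" idea that hyperreals would formalize):

```python
# Hypothetical sketch: outcomes scored on a primary and a secondary
# attribute, where the primary lexically dominates. Python's tuple
# comparison gives exactly this ordering.
outcomes = {
    "x": (1, 0),       # any primary 1 beats any primary 0
    "y": (0, 10**9),   # a huge secondary can't compensate
    "z": (1, -5),
}

def lex_prefers(a, b):
    return outcomes[a] > outcomes[b]

assert lex_prefers("x", "y")  # (1, 0) > (0, 1000000000)
assert lex_prefers("x", "z")  # ties on the primary broken by the secondary
# The standard result: once the secondary attribute ranges over a
# continuum, no single real-valued W can represent this ordering.
```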

And nothing for you to apologize for. I mean, if there's this much confusion about what I'm writing, it seems likely that the problem is at my end. (And I fully admit, there's much basic material I'm unfamiliar with.)

My criticism may be more of the writing than the concept. Once you establish that utilities obey a >/=/< relationship with one another, all these properties seem to flow rather cleanly and easily. If there's one thing I've learned from philosophy, it's that you should always be wary of someone who uses a thousand words when a hundred will do.

The properties are interesting and useful, it just seems that the explanation of them is being dragged out to make the process look both more complex and more "objective" than it really is, and that's what I'm wary of.

Well, right at the start, I said we could assign numbers that maintain preference rankings. Here I was trying to establish a meaningful scale, though. A numbering in which the sizes of the steps actually meant something.

"on the same scale"? I first need to explicitly specify what the heck that actually means. I'm doing this partly because when I've looked at some arguments, I'd basically go "but wait! what if...? and what about...? And you're just plain assuming that..." etc. So I'm trying to fill in all those gaps.

Further, ultimately what I'm trying to appeal to is not a blob of arbitrary axioms, but the notion of "don't automatically lose" plus some intuitive notions of "reasonableness".

Obviously, I'm being far less clear than I should be. Sorry about that.

Can you summarise this series of posts as a straightforward mathematical theorem? As someone with a mathematical background, I would find that a lot easier to grasp than this expanse of text. At the moment, I can't tell whether you are writing an exposition of the concepts, hypotheses and reasoning of the Utility Theorem, or doing something different.

Well, I'm not really doing it from an axiomatic perspective as such; basically, I'm arguing that "avoiding being stupid, that is, avoiding automatically losing and such, more or less uniquely leads to Bayesian decision theory."

ie "if an agent isn't acting in accordance with decision theory, they're going to be doing something stupid sooner or later". I'm trying to construct decision theory from this perspective. The basic notions I'm working with are things like Dutch Book arguments and Stephen Omohundro's vulnerability arguments. But I'm filling in bits that I personally had to struggle with, had to go "but wait, what about...?" until I eventually worked out what I felt to be the missing bits.

That's my basic overall intent here: "Intro to decision theory, or, intro to why decision theory is the Right Way." It seems, unfortunately, like I'm not doing that good a job of it presentation-wise, but at least it may be useful as reference material to someone.

Psy-Kosh, you should also use the "summary break" button with posts of this length.

Done. :)