This is part of a sequence on decision analysis; the first post is a primer on Uncertainty.

Decision analysis has two main parts: abstracting a real situation to math, and then cranking through the math to get an answer. We started by talking a bit about how probabilities work, and I'll finish up the inner math in this post. We're working from the inside out because it's easier to understand the shell once you understand the kernel. I'll provide an example of prospects and deals to demonstrate the math, but first we should talk about axioms. To be comfortable using this method, there are five axioms[1] you have to agree with; if you accept them, the method flows naturally. They are: Probability, Order, Equivalence, Substitution, and Choice.

Probability

You must be willing to assign a probability to quantify any uncertainty important to your decision, and your probabilities must be consistent with one another.

Order

You must be willing to order outcomes without any cycles. This can be called transitivity of preferences: if you prefer A to B, and B to C, you must prefer A to C.

Equivalence

If you prefer A to B to C, then there must exist a p where you are indifferent between a deal where you receive B with certainty and a deal where you receive A with probability p and C otherwise.
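In symbols (my notation, not anything from the original post): B ~ {p A, 1-p C} for some p between 0 and 1, where the deal notation means "A with probability p, C otherwise." The p that makes you indifferent is the preference probability we'll elicit below.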

Substitution

You must be willing to substitute an uncertain deal for a certain deal or vice versa if you are indifferent between them by the previous rule. Also called "do you really mean it?"

Choice

If you have a choice between two deals, both of which offer A or C, and you prefer A to C, then you must pick the deal with the higher probability of A.

 

These five axioms correspond to five actions you'll take in solving a decision problem. You assign probabilities, then you order outcomes, then you determine equivalence so you can substitute complicated deals for simple deals, until you're finally left with one obvious choice.

You might be uncomfortable with some of these axioms. You might say that your preferences genuinely cycle, or you're not willing to assign numbers to uncertain events, or you want there to be an additional value for certainty beyond the prospects involved. I can only respond that these axioms are prescriptive, not descriptive: you will be better off if you behave this way, but you must choose to.

Let's look at an example:

My Little Decision

(Pictured: the probability trees for the A and B tickets, side by side.)

Suppose I enter a lottery for MLP toys. I can choose from two kinds of tickets: an A ticket has a 1/3 chance of giving me a Twilight Sparkle, a 1/3 chance of giving me an Applejack, and a 1/3 chance of giving me a Pinkie Pie. A B ticket has a 1/3 chance of giving me a Rainbow Dash, a 1/3 chance of giving me a Rarity, and a 1/3 chance of giving me a Fluttershy. There are two deals for me to choose between (the A ticket and the B ticket) and six prospects, which I'll abbreviate to TS, AJ, PP, RD, R, and FS.

(Typically, decision nodes are represented as squares, and work just like uncertainty nodes, and so A would be above B with a decision node pointing to both. I've displayed them side by side because I suspect it looks better for small decisions.)

The first axiom, probability, is already taken care of for us, because our model of the world is already specified. We are rarely that lucky in the real world. The second axiom, order, is where we need to put in work. I need to come up with a preference ordering. I think about it and come up with the ordering TS > RD > R = AJ > FS > PP. Preferences are personal: beyond requiring internal consistency, we shouldn't require or expect that everyone will think Twilight Sparkle is the best pony. Preferences are also a source of uncertainty if prospects satisfy multiple different desires, as you may not be sure about your indifference tradeoffs between those desires. Even when prospects have only one measure, that is, they're all expressed in the same unit (say, dollars), you could be uncertain about your risk sensitivity, which shows up in preference probabilities but deserves a post of its own.

Now we move to axiom 3: I have an ordering, but that's not enough to solve this problem. I need a preference scoring to represent how much I prefer one prospect to another. I might prefer cake to chicken and chicken to death, but the second preference is far stronger than the first! To determine my scoring I need to imagine deals and assign indifference probabilities. There are a lot of ways to do this, but let's jump straight to the most sensible one: compare every prospect to a deal between the best and worst prospect.[2]

I need to assign a preference probability p such that I'm indifferent between the two deals presented: either RD with certainty, or a chance at TS (and PP if I don't get it). I think about it and settle on .9: I like RD almost as much as I like TS.[3] This indifference needs to be two-way: I need to be indifferent about trading a ticket for that deal for an RD, and I need to be indifferent about trading an RD for that deal.[4] I repeat this process with the rest, and decide on .6 for R and AJ and .3 for FS. It's useful to check that all the relationships I elicited before still hold: I prefer R and AJ equally, and the ordering is all correct. I don't need to do this process for TS or PP, as p is trivially 1 or 0 in those cases.
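To make the bookkeeping concrete, here is a minimal sketch in Python (the names and structure are mine, not the post's) recording those elicited values:

```python
# Preference probability for each prospect: the p at which I'm indifferent
# between that prospect for certain and the deal {p TS, 1-p PP}.
# TS (best) and PP (worst) are 1 and 0 by construction.
preference = {"TS": 1.0, "RD": 0.9, "R": 0.6, "AJ": 0.6, "FS": 0.3, "PP": 0.0}
```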

Now that I have a preference scoring, I can move to axiom 4. I start by making things more complicated: I take all of the prospects that weren't TS or PP and turn them into deals of {p TS, 1-p PP}. (Pictured is just the expansion of the right tree; try expanding the tree for A. It's much easier.)

Then, using axiom 1 again, I rearrange this tree. The A tree (not shown) and B tree now have only two prospects, and I've expressed the probabilities of those prospects in a complicated way that I know how to simplify.
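Here is one way that simplification might look in Python, a sketch under my own representation (restating the hypothetical `preference` dict from above so the snippet stands alone):

```python
preference = {"TS": 1.0, "RD": 0.9, "R": 0.6, "AJ": 0.6, "FS": 0.3, "PP": 0.0}

# Each ticket gives its three prospects with probability 1/3 each.
deal_a = ["TS", "AJ", "PP"]
deal_b = ["RD", "R", "FS"]

def effective_p_ts(deal):
    """Chance of TS after substituting each prospect for its equivalent
    {p TS, 1-p PP} deal and collapsing the two-level tree."""
    return sum(preference[prospect] for prospect in deal) / len(deal)

print(effective_p_ts(deal_a))  # 16/30 = 8/15, about 0.533
print(effective_p_ts(deal_b))  # 18/30 = 9/15 = 0.6 (up to float rounding)
```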

And we have one last axiom to apply: choice. Deal B has a higher chance of the better prospect, and so I pick it. Note that that's the case even though my actual chance of receiving TS with deal B is 0%; this is just how I'm representing my preferences, and this computation is telling me that my probability-weighted preference for deal B is higher than my probability-weighted preference for deal A. Not only do I know that I should choose deal B, but I know how much better deal B is for me than deal A.[5]

This was a toy example, but the beauty of this method is that all calculations are local. That means we can apply this method to a problem of arbitrary size without changes. Once we have probabilities and preferences for the possible outcomes, we can propagate those from the back of the tree through every node (decision or uncertainty) until we know what to do everywhere. Of course, whether the method will have a runtime shorter than the age of the universe depends on the size of your problem. You could use this to decide which chess moves to play against an opponent whose strategy you can guess from the board configuration, but I don't recommend it.[6] Typical real-world problems you would use this for are too large to solve with intuition but small enough that a computer (or you, working carefully) can solve them exactly if you give it the right input.
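As a sketch of what that back-to-front propagation might look like (the node representation here is my own, not from the post): uncertainty nodes take a probability-weighted average of their children, and decision nodes take the best child.

```python
# A minimal decision-tree rollback. Assumed representation: a leaf is a
# float (its preference probability); an uncertainty node is
# ("chance", [(prob, child), ...]); a decision node is ("decide", [child, ...]).

def rollback(node):
    """Return the probability-weighted preference value of a (sub)tree."""
    if isinstance(node, float):  # leaf: an elicited preference probability
        return node
    kind, children = node
    if kind == "chance":         # uncertainty node: expected value
        return sum(p * rollback(child) for p, child in children)
    if kind == "decide":         # decision node: take the best branch
        return max(rollback(child) for child in children)
    raise ValueError(f"unknown node kind: {kind}")

# The toy problem above, as a decision between the two tickets:
third = 1 / 3
ticket_a = ("chance", [(third, 1.0), (third, 0.6), (third, 0.0)])
ticket_b = ("chance", [(third, 0.9), (third, 0.6), (third, 0.3)])
print(rollback(("decide", [ticket_a, ticket_b])))  # ~0.6, i.e. ticket B
```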

Next we start the meat of decision analysis: reducing the real world to math.

 


1. These axioms are Ronald Howard's 5 Rules of Actional Thought.

2. Another method you might consider is comparing a prospect to its neighbors: RD in terms of TS and R, R in terms of RD and FS, FS in terms of R and PP. You could then unpack those chained deals into the preference probabilities used above.
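As a worked example of that unpacking (my arithmetic, using the values elicited in the post): if FS ~ {q R, 1-q PP}, then .3 = q × .6, so q = .5; if R ~ {q RD, 1-q FS}, then .6 = q × .9 + (1-q) × .3, so q = .5; and if RD ~ {q TS, 1-q R}, then .9 = q × 1 + (1-q) × .6, so q = .75. Solving those equations in either direction converts between the chained scoring and the best-versus-worst scoring.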

3. Assigning these probabilities is tough, especially if you aren't comfortable with probabilities. Some people find it helpful to use a probability wheel, where they can see what 60% looks like, and adjust the wheel until it matches what they feel. See also 1001 PredictionBook Nights and This is what 5% feels like.

4. In actual practice, deals often come with friction, and people tend to be attached to what they have beyond the amount that they want it (the endowment effect). It's important to make sure that you're actually coming up with an indifference value, not the worst deal you would be willing to make, and flipping the deal around and making sure you feel the same way is a good way to check.

5. If you find yourself disagreeing with the results of your analysis, double check your math and make sure you agree with all of your elicited preferences. An unintuitive answer can be a sign of an error in your inputs or your calculations, but if you don't find either make sure you're not trying to start with the bottom line.

6. There are supposedly 10^120 possible games of chess, and this method would evaluate all of them. Even with computation-saving implementation tricks, you're better off with another algorithm.

Comments

My Little Pony was clearly the correct choice, here.

I disagree. If one is not familiar with this opus, one shall have to expend more attention than would be suitable on matching the illustrations on the probability trees to their names and abbreviations.

Talk about inferential distance; for some reason it didn't even occur to me that people might not know their names (whereas I did think long and hard before going with that as my only example). I'll edit the pictures to include the names (tomorrow).

it didn't even occur to me that people might not know their names

Seriously? I'm seeing these damned horses for the first time in my life. Not only do I not know their names, but it's hard to distinguish them visually. And I'd put quite a high probability on the hypothesis that 95% of the people I personally know are unfamiliar with them as well.

whereas I did think long and hard before going with that as my only example

Couldn't you go with an apple, a pizza, a bicycle, a broken watch, a pen, or some other items whose names are known? It would perhaps be too ordinary, but your present choice really makes it harder.

Seriously?

Yes. I'm not sure why.

Couldn't you go with an apple, a pizza, a bicycle, a broken watch, a pen, or some other items whose names are known? It would perhaps be too ordinary, but your present choice really makes it harder.

Future posts will not re-use this example. I wanted to use lots of pictures for this one to make sure the steps were clear, and found it easier to motivate myself to do that with ponies.

And I'd put quite a high probability on the hypothesis that 95% of the people I personally know are unfamiliar with them as well.

I didn't know them either, but this post was one of the last straws that broke the back of my resistance and made me start watching the series.

(It's good, by the way!)

How hard are they to distinguish visually? They are each dominated by a single (unique) color. Is my model of visual perception wrong?

The colours are "similar", in that they are all pastel tones, and I probably don't have a good memory for colours. Now that I'm looking at the ponies for the third or fourth time, I'm starting to "feel the difference", but at the beginning, I saw six extremely similar ugly pictures.

My explanation probably isn't very good, but visual impressions are difficult to describe verbally.

Hmm, that's interesting, because I personally thought that the ponies were well designed. They each have a unique color scheme, and a distinct silhouette. Guess I was wrong, though.

I'm not saying they're not well designed. Perhaps this is the best they could do while maintaining their artistic style, or they didn't try to optimise distinctiveness in the first place.

Seconding prase: "Seriously? I'm seeing these damned horses for the first time in my life. Not only do I not know their names, but it's hard to distinguish them visually."

Make that "most of these damned horses" and "almost for the first time". I have kids, and they watch TV, so I'm vaguely aware of these things in much the same way I'm vaguely aware of cars honking outside: it's a somewhat unwelcome but not actively unpleasant awareness.

(Also the MLP allusion joke has already been done here before. Without pictures, admittedly, but I'd still judge it a "once and only once" kind of joke.)

Pictures edited.

I agree. I'm halfway through, and I really wish I had a handy key of which initials go with which ponies, or just those initials next to every instance of every pony. Right now, I have to track lots of extra details that have little bearing on the material.

TS - purple/purple, AJ - orange/yellow, PP - pink/pink,

RD - blue/rainbow, R - white/purple, FS - yellow/pink

Eh. I'm not familiar with this opus, and I managed.

This was a toy example

ba dum tish!

I strongly considered linking to this in the article.

Upvoted.

It seems, though, that the system discussed here should be prefaced by a scope. In particular, the scope for preferences about outcomes would include outcomes that you can assign a dollar value to. Outside the scope, for example, would be outcomes that you are willing to sacrifice your life for. The equivalence axiom implies a scalar metric applying to the whole scope.

You're right that this is an important thing to discuss, and so we'll talk about this more explicitly when I get to micromorts, which are probably one of the more useful concepts that come up in an introductory survey. (I think it's better to establish this methodology and show that it applies to those situations too than to argue it applies everywhere and then establish what it actually is.)

A quick answer, though: my sense is that people feel like they have categorical preferences, but actually have tradeoff preferences. If you ask people about chicken vs. {p cake, 1-p death}, many will scoff at the idea of taking any chance of death to upgrade from chicken to cake. When you look at actual behavior, though, those same people do risk death to upgrade from chicken to cake, and so in some sense that's "worth sacrificing your life for." The difference between the cake > chicken preference and the family alive > family dead preference, for example, seems to be one of degree and not one of kind.

these axioms are prescriptive, not descriptive: you will be better off if you behave this way, but you must choose to...

When you look at actual behavior, though...

Are you prescribing a rational method of decision making or are you describing actual behavior (possibly not rational)?

Are you prescribing a rational method of decision making or are you describing actual behavior (possibly not rational)?

Although Vaniver punted this by saying that you shouldn't judge ultimate values on rationality, you're right that this describes behaviour (and not values directly), and inferring values from behaviour assumes rationality.

However … it's intuitively obvious to me that it's perfectly rational (in some circumstances) to go to the store and get some cake when all that you have at home is chicken, even though leaving the house increases your risk of death (car crash, etc.). I assumed that this is the sort of behaviour that Vaniver was referring to; do you agree that it's rational (and so we can infer values in line with the axiom of equivalence)?

Values are endogenous to this system, and so it's not clear to me what it would mean to prescribe rational values. I think there is some risk of death small enough that accepting it in exchange for upgrading from chicken to cake is rational behavior for real people (who prefer cake to chicken), and difficulty in accepting that statement is generally caused by difficulty imagining small probabilities, not a value disagreement.

If you actually do have categorical preferences, you can implement them by multiple iterations of the method: "Minimize chance of dying, then maximize flavor on the optimal set."
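A minimal sketch of that two-pass idea in Python (the function and parameter names here are hypothetical, not from any post):

```python
def lexicographic_choice(deals, p_death, flavor):
    """Pick a deal by 'minimize chance of dying, then maximize flavor
    on the optimal set': filter to the safest deals first, then choose
    the tastiest among them."""
    least_risk = min(p_death(deal) for deal in deals)
    safest = [deal for deal in deals if p_death(deal) == least_risk]
    return max(safest, key=flavor)
```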

Vaniver & Toby,

I have no problem with accepting the axiom of equivalence for a usefully large subset of all preferences. It just seems that there may be some preferences which are 'priceless' as compared to chicken vs. cake.

It just seems that there may be some preferences which are 'priceless' as compared to chicken vs. cake.

There are definitely signalling reasons to believe this is so. If I work at a factory where management has put a dollar value on my accidental death due to workplace hazards and uses that value in decision-making, I might feel less comfortable than if I worked at a factory where management insists that the lives of its employees are priceless, because at the first factory the risk of my death is now openly discussed (and management might be callous).

I'm not sure there are decision-making reasons to believe this is so. The management of the second factory continue to operate it, even though there is some risk of accidental death, suggesting that their "priceless" is not "infinity" but "we won't say our price in public." They might perform a two-step optimization: "ensure risk of death is below a reasonable threshold at lowest cost," but it's not clear that will result in a better solution.* It may be that the risk of death could be profitably lowered below the reasonable threshold and they don't because there's no incentive to, or they may be spending more on preventing deaths than the risk really justifies. (It might be better to spend that money, say, dissuading employees from smoking, if you value their lifespans rather than not being complicit in their deaths.)

*For example, this is how the EPA manages a lot of pollutants, and it is widely criticized by economists for being a cap rather than a tax, because it doesn't give polluters the right incentives. So long as you're below the EPA standards, there's no direct benefit to you for halving your pollution even though there is that benefit to the local environment.

Upvote :-)

Followup questions spring to mind... Is there standard software for managing large trees of this sort? Is any of it open source? Are there file formats that are standard in this area? Do any posters (or lurkers who could be induced to start an account to respond) personally prefer particular tools in real life?

Actionable advice would be appreciated!

There's a program called Flying Logic that makes it really easy to draw trees and set up mathematical calculations -- sort of a DAG-based spreadsheet sort of thing, with edge weights and logical operators and whatnot. It's marketed mainly for doing cause-and-effect type analyses (using the Theory of Constraints "Thinking Processes") but it can do Bayesian belief nets and other things.

I've played with the demo and it's quite easy to use, but to be honest I didn't use the logical-spreadsheet functionality that much, except to the extent the tutorials show you how to play with confidence values and get the outputs of a fuzzy logic computation.

It's definitely not free OR open source, though.

There's a Stanford online course next semester called Probabilistic Graphical Models that will cover different ways of representing this sort of problem. I'm enrolled.

A while back I took a class called Aiding and Understanding Human Decision Making. It was a lot like this (aka not really my thing, but c'est la vie). I don't remember what software we used. I remember the software confused the dickens out of me, and I preferred just running all the various algorithms by hand. Don't remember what it was, though.

Due to my intense powers of Google-fu, I found the class website, but it's been updated since I took it. It mentions both @RISK and Crystal Ball.

Actually, it looks like before the current prof taught it, it was taught by a different prof. His course site is here and includes links to lots of good articles (that you don't need access to databases to view)

ETA: On further examination, that class website is older (rather than newer) than the class I took and slightly different (the one I took was HFE 890 in 2010 I think. That website is HFE 742 in 2006), so the software mentioned might be similarly outdated.

or you want there to be an additional value for certainty beyond the prospects involved. I can only respond that these axioms are prescriptive, not descriptive

I prescribe that people not follow all of the axioms, if they don't feel like it. Especially when the order and equivalence axioms are fleshed out with an implication of strict indifference. Some relevant discussion.

Thank you for this article. Can you please guide me on how you simplified and computed the probabilities of A and B in the final step?

That is, where did 8/15 and 9/15 come from?

Let's step through the B case. I only need to track the probability of TS, because the probability of PP is 1 minus that. The RD turns into .9/3 = 9/30, the R turns into .6/3 = 6/30, and the FS turns into .3/3 = 3/30. Add those together and you get (9+6+3)/30 = 18/30, which simplifies to 9/15.

What about the A case? Here the underlying probabilities are 1, .6, and 0, so 10/30, 6/30, and 0/30. Their sum, 16/30, simplifies to 8/15.
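For anyone who wants to verify that arithmetic exactly, a quick sketch using Python's standard fractions module:

```python
from fractions import Fraction

# Each branch has chance 1/3; weight each preference probability by it.
b = Fraction(1, 3) * sum((Fraction(9, 10), Fraction(6, 10), Fraction(3, 10)))
a = Fraction(1, 3) * sum((Fraction(1), Fraction(6, 10), Fraction(0)))
print(a, b)  # 8/15 3/5 (and 3/5 is the same as 9/15)
```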

This recursive expected value calculation is what I implemented to solve my coinflip question. There's a link to the Python code in that post for anyone who is curious about implementation.
