
cousin_it comments on Stupid Questions Open Thread Round 4 - Less Wrong Discussion

6 points · Post author: lukeprog 27 August 2012 12:04AM




Comment author: lukeprog 27 August 2012 12:20:24AM * 8 points

I finally decided it's worth some of my time to try to gain a deeper understanding of decision theory...

Question: Can Bayesians transform decisions under ignorance into decisions under risk by assuming the decision maker can at least assign probabilities to outcomes using some kind of ignorance prior(s)?

Details: "Decision under uncertainty" is used to mean various things, so for clarity's sake I'll use "decision under ignorance" to refer to a decision for which the decision maker does not (perhaps "cannot") assign probabilities to some of the possible outcomes, and I'll use "decision under risk" to refer to a decision for which the decision maker does assign probabilities to all of the possible outcomes.

There is much debate over which decision procedure to use when facing a decision under ignorance when there is no act that dominates the others. Some proposals include: the leximin rule, the optimism-pessimism rule, the minimax regret rule, the info-gap rule, and the maxipok rule.
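For concreteness, here is a minimal sketch of how three of those rules can be applied to a toy payoff matrix. The acts, states, and payoffs below are hypothetical placeholders of my own, not anything from the decision-theory literature, and only the maximin step of leximin, the optimism-pessimism (Hurwicz) rule, and minimax regret are shown:

```python
# A toy payoff matrix for a decision under ignorance (all numbers hypothetical).
# Rows are acts, columns are states; no probabilities are assigned to the states.
payoffs = {
    "act_a": [0, 10, 4],
    "act_b": [3, 3, 3],
    "act_c": [1, 8, 2],
}

# Maximin (the first step of the leximin rule): pick the act whose worst case is best.
maximin_choice = max(payoffs, key=lambda a: min(payoffs[a]))

# Optimism-pessimism (Hurwicz) rule with optimism index alpha.
alpha = 0.5
hurwicz_choice = max(
    payoffs, key=lambda a: alpha * max(payoffs[a]) + (1 - alpha) * min(payoffs[a])
)

# Minimax regret: regret is the shortfall from the best act in each state;
# pick the act whose largest regret is smallest.
best_per_state = [max(payoffs[a][s] for a in payoffs) for s in range(3)]
regret_choice = min(
    payoffs,
    key=lambda a: max(best_per_state[s] - payoffs[a][s] for s in range(3)),
)

print(maximin_choice, hurwicz_choice, regret_choice)  # act_b, act_a, act_c
```

On this particular matrix the three rules pick three different acts, which is part of why the debate over decisions under ignorance persists.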

However, there is broad agreement that when facing a decision under risk, rational agents maximize expected utility. Because we have a clearer procedure for dealing with decisions under risk than we do for dealing with decisions under ignorance, many decision theorists are tempted to transform decisions under ignorance into decisions under risk by appealing to the principle of insufficient reason: "if you have literally no reason to think that one state is more probable than another, then one should assign equal probability to both states."
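Continuing the hypothetical matrix from the sketch above, here is what the insufficient-reason transformation looks like: assign a uniform ignorance prior over the states and then maximize expected utility as in an ordinary decision under risk. Again, the numbers are placeholders of my own:

```python
# Same hypothetical payoff matrix, now turned into a decision under risk by
# assigning a uniform ignorance prior over the three states (insufficient reason).
payoffs = {
    "act_a": [0, 10, 4],
    "act_b": [3, 3, 3],
    "act_c": [1, 8, 2],
}
prior = [1 / 3, 1 / 3, 1 / 3]

expected_utility = {
    act: sum(p * u for p, u in zip(prior, payoffs[act])) for act in payoffs
}
best_act = max(expected_utility, key=expected_utility.get)
print(expected_utility, best_act)  # act_a has the highest expected utility
```

Note that the expected-utility answer (act_a here) need not agree with the maximin or minimax-regret answers on the same matrix, which is why the choice of transformation matters.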

And if you're a Bayesian decision-maker, you presumably have some method for generating ignorance priors, whether or not that method always conforms to the principle of insufficient reason, and even if you doubt you've found the final, best method for assigning ignorance priors.

So if you're a Bayesian decision-maker, doesn't that mean that you only ever face decisions under risk, because at the very least you're assigning ignorance priors to the outcomes for which you're not sure how to assign probabilities? Or have I misunderstood something?

Comment author: cousin_it 27 August 2012 01:17:54AM * 7 points

What AlexMennen said. For a Bayesian there's no difference in principle between ignorance and risk.

One wrinkle is that even Bayesians shouldn't have prior probabilities for everything, because if you assign a prior probability to something that could indirectly depend on your decision, you might lose out.

A good example is the absent-minded driver problem. While driving home from work, you pass two identical-looking intersections. At the first one you're supposed to go straight, at the second one you're supposed to turn. If you do everything correctly, you get utility 4. If you goof and turn at the first intersection, you never arrive at the second one, and get utility 0. If you goof and go straight at the second, you get utility 1. Unfortunately, by the time you get to the second one, you forget whether you'd already been at the first, which means at both intersections you're uncertain about your location.

If you treat your uncertainty about location as a probability and choose the Bayesian-optimal action, you'll get demonstrably worse results than if you'd planned your actions in advance or used UDT. The reason, as pointed out by taw and pengvado, is that your probability of arriving at the second intersection depends on your decision to go straight or turn at the first one, so treating it as unchangeable leads to weird errors.
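For concreteness, here is one way to compute the "planned in advance" strategy from the payoffs above. The setup and variable names are mine; the only free choice at the planning stage is p, the probability of going straight at an intersection, since the driver can't tell the two intersections apart:

```python
# A minimal sketch of the planning-stage analysis of the absent-minded driver,
# using the payoffs from the comment above.
def expected_utility(p):
    # Turn at the first intersection: probability (1 - p), utility 0.
    # Straight at the first, turn at the second: probability p * (1 - p), utility 4.
    # Straight at both: probability p * p, utility 1.
    return 4 * p * (1 - p) + 1 * p * p

# Note that the probability of ever reaching the second intersection is p itself,
# i.e. it depends on the very decision being evaluated -- which is the point above.
best_p = max((i / 1000 for i in range(1001)), key=expected_utility)
print(best_p, expected_utility(best_p))  # roughly p = 2/3, expected utility 4/3
```

Assuming I've set the payoffs up as described, the planning optimum is a randomized strategy (go straight with probability 2/3) worth about 4/3, better than either pure strategy, which yield 1 and 0.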

Comment author: Vladimir_Nesov 27 August 2012 02:22:35PM * 0 points

One wrinkle is that even Bayesians shouldn't have prior probabilities for everything, because if you assign a prior probability to something that could indirectly depend on your decision, you might lose out.

... your probability of arriving at the second intersection depends on your decision to go straight or turn at the first one, so treating it as unchangeable leads to weird errors.

"Unchangeable" is a bad word for this, as it might well be thought of as unchangeable, if you won't insist on knowing what it is. So a Bayesian may "have probabilities for everything", whatever that means, if it's understood that those probabilities are not logically transparent and some of the details about them won't necessarily be available when making any given decision. After you do make a decision that controls certain details of your prior, those details become more readily available for future decisions.

In other words, the problem is not in assigning probabilities to too many things, but in assigning them arbitrarily and thus incorrectly. If the correct assignment of probability is such that the probability depends on your future decisions, you won't be able to know this probability, so if you've "assigned" it in such a way that you do know what it is, you must have assigned a wrong thing. Prior probability is not up for grabs etc.

Comment author: DanielLC 27 August 2012 04:27:53AM 0 points

so treating it as unchangeable leads to weird errors.

The prior probability is unchangeable. It's just that you make your decision based on the posterior probability, conditioning on each candidate decision in turn. At least, that's what you do if you use EDT. I'm not entirely familiar with the other decision theories, but I'm pretty sure they all have prior probabilities for everything.
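A minimal sketch of the EDT recipe described here: the prior is held fixed, and each candidate decision is evaluated by the posterior probability of outcomes conditional on that decision. The actions, conditional probabilities, and utilities below are hypothetical placeholders, not anything from the thread:

```python
# EDT-style choice: score each action by sum over outcomes of
# P(outcome | action) * utility(outcome), using posteriors conditioned on the action.
p_outcome_given_action = {
    "take_umbrella": {"stay_dry": 0.95, "get_wet": 0.05},
    "leave_umbrella": {"stay_dry": 0.60, "get_wet": 0.40},
}
utility = {"stay_dry": 10, "get_wet": 0}

def edt_value(action):
    posterior = p_outcome_given_action[action]
    return sum(posterior[o] * utility[o] for o in posterior)

best_action = max(p_outcome_given_action, key=edt_value)
print(best_action, edt_value(best_action))  # take_umbrella, 9.5
```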