Comment author: Dagon 30 January 2012 10:49:18PM 0 points [-]

Yes. I mean that, when your choice is different from what standard (or, in some cases, timeless) decision theory calculates for the same prior beliefs and outcome->utility mapping, you're losing utility. I can't tell if you think that this theory does have different outcomes, or if you think that this is "just" a simplification that gives the same outcomes.

Comment author: fool 30 January 2012 10:56:34PM 0 points [-]

I replied to Manfred with the Ellsberg example having 31 instead of 30 red balls. Does that count as different? If so, do I lose utility?

Comment author: Will_Sawin 30 January 2012 10:06:21PM 0 points [-]

I don't get what this range signifies. There should be a data point about how ambiguous it is, which you could use or not use to influence actions. (For instance, if someone says they looked in the urn and it seemed about even, that reduces ambiguity.) But then you want to convert that into a range (which does not refer to the actual range of frequencies, which could be 1/3 +- 1/3, and which depends on your degree of aversion), and then you want to convert that into a decision?

Comment author: fool 30 January 2012 10:27:51PM 0 points [-]

Well, in terms of decisions, P(green) = 1/3 +- 1/9 means that I'd buy a bet on green for the price of a true randomised bet with probability 2/9, and sell for 4/9, with the caveats mentioned.
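A quick sketch of how that interval maps to prices (treating the lower endpoint as the buying price and the upper endpoint as the selling price is my reading of the claim above, not something fool spells out):

```python
from fractions import Fraction

# P(green) = 1/3 +- 1/9
center, width = Fraction(1, 3), Fraction(1, 9)
buy_price = center - width   # 2/9: buy a bet on green at the price of a fair bet with probability 2/9
sell_price = center + width  # 4/9: sell (take the other side) at 4/9 or better

print(buy_price, sell_price)  # 2/9 4/9
```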

We might say that the price of a left boot is $15 +- $5 and the price of a right boot is $15 -+ $5.

Comment author: Dagon 29 January 2012 08:04:16PM 3 points [-]

Showing that it can't be pumped just means that it's consistent. It doesn't mean it's correct. Consistently wrong choices cost utility, and are not rational.

I can't tell what position you're taking. Is ambiguity aversion something more than just a common bias? Personally, I understand the intuitive appeal, but on reflection I am indifferent to a bet on red-or-blue or on green-or-blue. If I have additional information about the distribution of green and blue (including knowing anything about who's offering the wagers, or in some cases, that the wager is offered at all), then I'll update my prior of equal probability of blue or green.

Am I doing it wrong? should I prefer one or the other? How much should I prefer it (how much, as a fraction of the bet, should I pay to be allowed to make the bet I prefer)?

For your other examples, boots' rule and yo mama, both are trivial cases of decomposing "value" too far. There's nothing inconsistent about a mother preferring to have a visible fairness mechanism, and to follow that mechanism when it is present. There's nothing wrong with valuing left and right boots differently based on your expected needs. Neither of these have ANYTHING to do with probability calculation, they're just a way to point out that value is not simple. Neither of these present any difficulty to standard decision theory.

I can't tell as of yet whether intervals are helpful in calculating something (and if so, what) that always gets the same answer as the simpler (to me) Bayesian calculation, or whether they can yield different (and better) beliefs in some cases.

I would greatly love an example that compares a plain Bayesian analysis with an interval analysis. Start with a prior and a prior-interval, update both based on some discovery, and then propose a wager where the two methods give different beliefs.

Comment author: fool 30 January 2012 10:12:47PM 0 points [-]

Showing that it can't be pumped just means that it's consistent. It doesn't mean it's correct. Consistently wrong choices cost utility, and are not rational.

To be clear: you mean that my choices somehow cost utility, even if they're consistent?

I would greatly love an example that compares a plain Bayesian analysis with an interval analysis.

It's a good idea. But at the moment I think more basic questions are in dispute.

Comment author: Will_Sawin 29 January 2012 08:10:00PM 0 points [-]

There seems to be an issue of magnitude here. There are 3 possible ways the urn can be filled:

  1. It could be selected uniformly at random
  2. It could be selected through some unknown process: uniformly at random, biased against me, biased towards blue, biased towards green, always exactly 30/30, etc.
  3. It could be selected so as to exactly minimize my profits

2 seems a lot more like 1 than it does like 3. Even without using any Bayesian reasoning, a range is a lot more like the middle of the range than it is like one end of the range.

(This argument seems to suggest a "common-sense human" position between high ambiguity aversion and no ambiguity aversion, but most of us would find that untenable.)

An alternative way of talking about it:

The point I am making is that it is much more clear which direction my new information is supposed to influence you than which direction your information is supposed to influence me. If a variable x is in the range [0,1], finding out that it is actually 0 is very strongly biasing information: almost every value x could have been before is strictly higher than the new known value. But finding out that it is 1/2 does not have a clear direction of bias. Maybe it should make you switch to more confidently betting x is high, maybe it should make you switch to more confidently betting x is low. I don't know; it depends on details of the case, and is not very robust to slight changes in the situation.

Comment author: fool 30 January 2012 09:57:57PM 1 point [-]

(This argument seems to suggest a "common-sense human" position between high ambiguity aversion and no ambiguity aversion, but most of us would find that untenable.)

Well then, P(green) = 1/3 +- 1/3 would be extreme ambiguity aversion (such as would match the adversary I think you are proposing), and P(green) = 1/3 exactly would be no ambiguity aversion, so something like P(green) = 1/3 +- 1/9 would be such a compromise, no? And why is that untenable?

To clarify: the adversary you have in mind, what powers does it have, exactly?

Generally speaking, an adversary would affect my behaviour, unless the loss of ambiguity aversion from the fact that all probabilities are known were exactly balanced by the gain in ambiguity aversion from the fact that said probabilities are under the control of a (limited) adversary.

(Which is similar to saying that finding out the true distribution from which the urn was drawn would indeed affect your behaviour, unless you happened to find that the distribution was the prior you had in mind anyway.)

Comment author: Manfred 29 January 2012 06:55:34PM 1 point [-]

I agree, I don't think there is any way to dutch-book someone for being wrong but consistent with the laws of probability (that is, still assigning 1/3 probabilities to r,g,b even when that's wrong). They simply lose money on average. But this is an extra fact, unrelated to the trivial fact that you can't dutch-book someone over an arbitrary choice between two equivalent options. Once they start paying for equivalent options, then they get money-pumped.

Comment author: fool 30 January 2012 09:49:19PM 0 points [-]

Once they start paying for equivalent options, then they get money-pumped.

Okay. Suppose there is an urn with 31 red balls, and 60 balls that are either green or blue. I choose to bet on red over green, and green-or-blue over red-or-blue. These are no longer equivalent options, and this is definitely not consistent with the laws of probability. Agreed?

(My prior probability interval is P(red) = 31/91 exactly, P(green) = (1/2 +- 1/6)(60/91), P(blue) = (1/2 -+ 1/6)(60/91).)
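Checking those stated preferences against that interval prior (a sketch; evaluating each bet at the worst endpoint of its interval is my assumed decision rule):

```python
from fractions import Fraction

# Urn: 31 red balls, 60 that are green or blue. fool's interval prior:
p_red = Fraction(31, 91)                                     # exact
g_lo = (Fraction(1, 2) - Fraction(1, 6)) * Fraction(60, 91)  # 20/91, worst case for green

# Worst-case value of each $1 bet (blue's worst case mirrors green's):
bet_red = p_red                       # 31/91, no ambiguity
bet_green = g_lo                      # 20/91
bet_green_or_blue = Fraction(60, 91)  # exact: green and blue are complementary
bet_red_or_blue = p_red + g_lo        # 51/91: blue at its worst case, 20/91

assert bet_red > bet_green                  # red preferred to green
assert bet_green_or_blue > bet_red_or_blue  # green-or-blue preferred to red-or-blue
```

So the two preferences are simultaneously strict, which no single probability assignment can reproduce.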

It sounds like you expected (and continue to expect!) to be able to money-pump me.

Comment author: endoself 29 January 2012 01:27:03AM *  1 point [-]

If I had set P(green) = 1/3 +- 1/3, then yes. But in this case I'm not ambiguity averse to the extreme, like I mentioned. P(green) = 1/3 +- 1/9 was what I had, i.e. (1/2 +- 1/6)(2/3). The tie point would be 20 red balls, i.e. 1/4 exactly versus (1/2 +- 1/6)(3/4).

Well utility is invariant under positive affine transformations, so you could have 30U +- 10U and shift the origin so you have 10U +- 10U. More intuitively, if you have 30U +- 10U, you can regard this as 20U + (20U,0U) and you would be willing to trade this for 21U, but you're guaranteed the first 20U and you would think it's excessive to trade (20U,0U) for just 1U.
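The affine-invariance point can be sketched in code (the worst-case valuation rule here is my reading of the interval approach, not something stated above):

```python
def maximin(*outcomes):
    """Value an ambiguous prospect at its worst case."""
    return min(outcomes)

# 30U +- 10U is the prospect (20U, 40U); shifting the origin down by 20U
# turns it into 10U +- 10U, i.e. (0U, 20U), without changing any ranking:
assert maximin(20, 40) - 20 == maximin(0, 20)

# The decomposition above: 30U +- 10U = a guaranteed 20U plus an ambiguous
# (20U, 0U) part. Under the worst-case rule the two readings agree, so trading
# the whole prospect for 21U and trading (20U, 0U) alone for 1U are the same trade:
assert maximin(20, 40) == 20 + maximin(20, 0)
```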

Maybe. Though I put it to you that the mother wants nothing more than what is "best for her children". Even if we did agree with her about what is best for each child separately, we might still disagree with her about what is "best for her children".

Perhaps I just want the "best chance of winning".

Interesting.

(ADDED:) If it helps, I don't think the fact that it is she making the decision is the issue - she would wish the same thing to happen if her children were in someone else's care.

What if they were in the care of her future self who already flipped the coin? Why is this different?

Bonus scenario: There are two standard Ellsberg-paradox urns, each paired with a coin. You are asked to pick one; you get a reward iff ((green and heads) or (blue and tails)). At first you are indifferent, as both are identical. However, before you make your selection, one of the coins is flipped. Are you still indifferent?

Comment author: fool 29 January 2012 03:54:54PM *  0 points [-]

you would think it's excessive to trade (20U,0U) for just 1U.

What bet did you have in mind that was worth (20U,0U)? One of the simplest examples, if P(green) = 1/3 +- 1/9, would be 70U if green, -20U if not green. Does it still seem excessive to be neutral to that bet, and to trade it for a certain 1U (with the caveats mentioned)?
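A sketch of evaluating that bet at the endpoints of the interval (assuming expected utility is computed at each endpoint and the worst case is what sets the price):

```python
from fractions import Fraction

def eu(p_green, win=70, lose=-20):
    """Expected utility of the bet: 70U if green, -20U if not green."""
    return p_green * win + (1 - p_green) * lose

# P(green) = 1/3 +- 1/9, so the interval endpoints are 2/9 and 4/9:
p_lo = Fraction(1, 3) - Fraction(1, 9)  # 2/9
p_hi = Fraction(1, 3) + Fraction(1, 9)  # 4/9

print(eu(p_lo))  # 0  -- worst case: exactly neutral, hence indifference to a certain 0U
print(eu(p_hi))  # 20 -- best case
```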

What if they were in the care of her future self who already flipped the coin? Why is this different?

This I don't understand. She is her future self, isn't she?

Bonus scenario:

Oh boy!

There are two standard Ellsberg-paradox urns, each paired with a coin. You are asked to pick one; you get a reward iff ((green and heads) or (blue and tails)). At first you are indifferent, as both are identical. However, before you make your selection, one of the coins is flipped. Are you still indifferent?

So there are two urns, and one coin is going to be flipped. No matter what, I'm offered a randomised bet on the second urn. If the coin comes up heads I'll be offered a bet on green on the first urn; if the coin comes up tails I'll be offered a bet on blue on the first urn. So it looks like my options are:

A) choose urn 1 either way

B) choose urn 1 (i.e. green) if the coin comes up heads, choose urn 2 if the coin comes up tails

C) choose urn 2 if the coin comes up heads, choose urn 1 (i.e. blue) if the coin comes up tails

D) choose urn 2 either way

And to be pedantic: E) flip my own coin to randomise between options B and C.

I am indifferent between A, D, and E, which I prefer to B or C.

Generally, we seem to be really overanalysing the phrase "ought to flip a coin".

Comment author: Manfred 28 January 2012 11:01:34PM 2 points [-]

The Ellsberg paradox is boring because it can have no possible effect on utility or the log score of your beliefs (note that this means it should not be conflated with the mother with the candy). This was, in fact, the main criticism of the previous post. Given that, the fact that it can't lead to being dutch booked is trivial. I would rather this post was a lot shorter and had fewer generalizations.

Comment author: fool 29 January 2012 03:41:54PM 1 point [-]

I'm not sure what you mean. If it's because the situation was too symmetrical, I think I addressed that.

For example, you could add or remove a couple of red balls. I still choose red over green, and green-or-blue over red-or-blue. I think the fact that it still can't lead to being dutch booked is going to be a surprise to many LW readers.

Comment author: Will_Sawin 29 January 2012 07:53:06AM 1 point [-]

In which direction should it change my behavior? What does it push me towards?

Comment author: fool 29 January 2012 03:38:54PM *  2 points [-]

Well, it would push me away from ambiguity aversion, I would become indifferent between a bet on red and a bet on green, etc.

Put it another way: a frequentist could say to you: "Your Bayesian behaviour is a perfect frequentist model of a situation where:

  1. You choose a bet

  2. An urn is selected uniformly at random from the fictional population

  3. An outcome occurs.

It seems totally unreasonable to apply it in the Ellsberg situation or similar ones. For instance, you would then not react if you were in fact told the distribution."

And actually, as it happens, this isn't too far from the sort of things you do hear in frequentist complaints about Bayesianism. You presumably reject this frequentist argument against you.

And I reject your Bayesian argument against me.

Comment author: Dagon 29 January 2012 07:44:54AM 0 points [-]

Switching between utils and dollars is very confusing. You should be very clear when you do so, or you risk confusing risk aversion with simple declining marginal utility.

If money has logarithmic value to you, then existing unresolved bets SHOULD have an effect on your preferences even if you're risk-neutral.

Comment author: fool 29 January 2012 03:29:20PM 1 point [-]

If money has logarithmic value to you, you are not risk neutral, the way I understand the term. How are you using the term?

Comment author: endoself 28 January 2012 11:33:53PM *  0 points [-]

OK, now I understand why this is a necessary part of the framework.

I do think there is a problem with strictly choosing the lesser of the two utilities. For example, you would choose 1U with certainty over something like 10U ± 10U. You said that you would still make the ambiguity-averse choice if a few red balls were taken out, but what if almost all of them were removed?

On a more abstract note, your stated reasons for your decision seem to be that you actually care about what might have happened for reasons other than the possibility of it actually happening (does this make sense and accurately describe your position?). I don't think humans actually care about such things. Probability is in the mind; a difference in what might have happened is a difference in states of knowledge about states of knowledge. A sentence like "I know now that my irresponsible actions could have resulted in injuries or deaths" isn't actually true given determinism; it's about what you now believe you should have known in the past. [1] [2]

Getting back to the topic, people's desires about counterfactuals are desires about their own minds. What Irina and Joey's mother wants is to not intend to favour either of her children. [3] In reality, the coin is just as deterministic as her decision. Her preference for randomness is about her mind, not reality.

[1] True randomness like that postulated by some interpretations of QM is different, and I'm not saying that people absolutely couldn't have preferences about truly random counterfactuals. Such a world would have to be pretty weird, though. It would have to be timeful, for instance, since the randomness would have to be fundamentally indeterminate before it happens, rather than just not known yet, and timeful physics doesn't even make sense to me.

[2] This is itself a counterfactual, but that's irrelevant in this context.

[3] Well, my model of her prefers flipping a coin to drawing green or blue balls from an urn, but my model of her does not agree with me on a lot of things. If she were a Bayesian decision theorist, I would expect her to be indifferent between the coin and the urn, but prefer either to having to choose for herself.

Comment author: fool 29 January 2012 12:17:53AM *  1 point [-]

For example, you would choose 1U with certainty over something like 10U ± 10U. You said that you would still make the ambiguity-averse choice if a few red balls were taken out, but what if almost all of them were removed?

If I had set P(green) = 1/3 +- 1/3, then yes. But in this case I'm not ambiguity averse to the extreme, like I mentioned. P(green) = 1/3 +- 1/9 was what I had, i.e. (1/2 +- 1/6)(2/3). The tie point would be 20 red balls, i.e. 1/4 exactly versus (1/2 +- 1/6)(3/4).
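The tie point can be checked directly (a sketch; comparing a bet on red against the worst endpoint of the green interval is the assumed rule):

```python
from fractions import Fraction

def green_worst_case(n_red):
    """Worst-case P(green) with n_red red balls and 60 green-or-blue balls."""
    total = n_red + 60
    return (Fraction(1, 2) - Fraction(1, 6)) * Fraction(60, total)

def p_red(n_red):
    return Fraction(n_red, n_red + 60)

# With 20 red balls the two bets tie exactly, at 1/4 each:
assert p_red(20) == green_worst_case(20) == Fraction(1, 4)
# With 31 red balls (as in the reply to Manfred), red still beats worst-case green:
assert p_red(31) > green_worst_case(31)
```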

On a more abstract note, your stated reasons for your decision seem to be that you actually care about what might have happened for reasons other than the possibility of it actually happening (does this make sense and accurately describe your position?).

It makes sense, but I don't feel this really describes me. I'm not sure how to clarify. Maybe an analogy:

What Irina and Joey's mother wants is to not intend to favour either of her children.

Maybe. Though I put it to you that the mother wants nothing more than what is "best for her children". Even if we did agree with her about what is best for each child separately, we might still disagree with her about what is "best for her children".

Perhaps I just want the "best chance of winning".

(ADDED:) If it helps, I don't think the fact that it is she making the decision is the issue - she would wish the same thing to happen if her children were in someone else's care.
