Comment author: endoself 13 February 2012 05:14:00AM 0 points

To quote the article you linked: "Jaynes certainly believed very firmly that probability was in the mind ... there was only one correct prior distribution to use, given your state of partial information at the start of the problem."

I have not specified how prior intervals are chosen. I could (for the sake of argument) claim that there was only one correct prior probability interval to assign to any event, given the state of partial information.

At no time is the agent deciding based on anything other than the (present, relevant) probability intervals, plus an internal state variable (the virtual interval).

Well, you'd have to say how you choose the interval. Jaynes justified his prior distributions with symmetry principles and maximum entropy. So far, your proposals allow the interval to depend on a coin flip that has no effect on the utility or on the process that does determine the utility. That is not what predicting the results of actions looks like.

Now it is true that I will still make a choice in ambiguous situations, but this choice depends on a "state variable". In unambiguous situations the choice is "stateless".

Given an interval, your preferences obey transitivity even though ambiguity doesn't, right? I don't think that nontransitivity is the problem here; the thing I don't like about your decision process is that it takes into account things that have nothing to do with the consequences of your actions.

I'm not really sure what a lot of this means

Sorry about that. Maybe I've been clearer this time around?

I only mean that middle paragraph, not the whole comment.

Comment author: fool 23 February 2012 08:20:12PM 1 point

If there is nothing wrong with having a state variable, then sure, I can give a rule for initialising it, and call it "objective". It is "objective" in that it looks like the sort of thing that Bayesians call "objective" priors.

E.g. you have an objective prior in mind for the Ellsberg urn, presumably uniform over the 61 configurations, perhaps based on max entropy. What if instead there had been one draw (with replacement) from the urn, and it had come up green? You can't apply max entropy directly now. That's OK: apply max entropy "retroactively" and run the usual update process to get your initial probabilities.

So we could normally start the state variable at the "natural value" (virtual interval = 0; as it happens, this is also justified by symmetry in this case). But if there is information to consider, then we set it retroactively and run the decision method forward to get its starting value.

This has a similar claim to objectivity as the Bayesian process, so I still think the point of contention has to be in using stateful behaviour to resolve ambiguity.

Comment author: Stuart_Armstrong 13 February 2012 11:58:24AM 1 point

But it has everything to do with ambiguity aversion: the trade only fails because of it. If we reach into the system, and remove ambiguity aversion for this one situation, then we end up unarguably better (because of the symmetry).

Yes, sometimes the subsidy will be so high that even the ambiguity averse will trade, or sometimes so low that even Bayesians won't trade; but there will always be a middle ground where Bayesians win.

As I said elsewhere, ambiguity aversion seems like the combination of an agent who will always buy below the price a Bayesian would pay, and another who will always sell above the price a Bayesian would pay. Seen like that, your case that they cannot be arbitraged is plausible. But a rock cannot be arbitraged either, so that's not sufficient.

This example hits the ambiguity averter exactly where it hurts, exploiting the fact that there are deals they will not undertake either as buyer or seller.

Comment author: fool 23 February 2012 08:11:27PM 2 points

No, (un)fortunately it is not so.

I say this has nothing to do with ambiguity aversion, because we can replace (1/2, 1/2+-1/4, 1/10) with all sorts of things which don't involve uncertainty. We can make anyone "leave money on the table". In my previous message, using ($100, a rock, $10), I "proved" that a rock ought to be worth at least $90.

If this is still unclear, then I offer your example back to you with one minor change: the trading incentive is still 1/10, and one agent still has 1/2+-1/4, but the other agent now has 1/4. The Bayesian agent holding 1/2+-1/4 thinks it's worth more than 1/4 plus 1/10, so it refuses to trade. The ambiguity-averse agents are under no such illusion.

So, the boot's on the other foot: we trade, and you don't. If your example was correct, then mine would be too. But presumably you don't agree that you are "leaving money on the table".

Comment author: endoself 11 February 2012 01:59:00AM 0 points

Would a single ball that is either green or blue work?

That still seems like a structureless event.

Okay.

I think you're really saying two things: the correct decision is a function of (present, relevant) probability, and probability is in the mind.

Of course even for Bayesians there are equiprobable options, so decisions can't be entirely a function of probability.

Well, once you assign probabilities to everything, you're mostly a Bayesian already. I think the best summary would be that when one must make a decision under uncertainty, preference between actions should depend on and only on one's knowledge about the possible outcomes.

More precisely, your key proposition is that probabilistic indecision has to be transitive. Ambiguity would be an example of intransitive indecision.

Aren't you violating the axiom of independence but not the axiom of transitivity?

I'd say the former is your key proposition. It would be sufficient to rule out that an agent's internal variables, like the virtual interval, could have any effect.

I'm not really sure what a lot of this means. The virtual interval seems to me to be subjectively objective in the same way probability is. Also, do you mean 'could have any effect' in the normative sense of an effect on what the right choice is?

Comment author: fool 12 February 2012 06:15:54PM 0 points

I think the best summary would be that when one must make a decision under uncertainty, preference between actions should depend on and only on one's knowledge about the possible outcomes.

To quote the article you linked: "Jaynes certainly believed very firmly that probability was in the mind ... there was only one correct prior distribution to use, given your state of partial information at the start of the problem."

I have not specified how prior intervals are chosen. I could (for the sake of argument) claim that there was only one correct prior probability interval to assign to any event, given the state of partial information.

At no time is the agent deciding based on anything other than the (present, relevant) probability intervals, plus an internal state variable (the virtual interval).

Aren't you violating the axiom of independence but not the axiom of transitivity?

My decisions violate rule 2 but not rule 1. Unambiguous interval comparison violates rule 1 and not rule 2. My decisions are not totally determined by unambiguous interval comparisons.

Perhaps an example: there is an urn with 29 red balls, 2 orange balls, and 60 balls that are either green or blue. The choice between a bet on red and on green is ambiguous. The choice between a bet on (red or orange) and on green is ambiguous. But the choice between a bet on (red or orange) and on red is perfectly clear. Ambiguity is intransitive.
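The comparisons above can be sketched numerically (my own sketch, not from the comment): treat each event as a probability interval and call a comparison unambiguous only when the intervals don't overlap (or coincide). This deliberately ignores the post's orientation machinery.

```python
from fractions import Fraction as F

def comparable(a, b):
    """Intervals (lo, hi): unambiguous iff disjoint or identical."""
    return a[1] < b[0] or b[1] < a[0] or a == b

# 29 red + 2 orange + 60 green-or-blue = 91 balls
red        = (F(29, 91), F(29, 91))
red_orange = (F(31, 91), F(31, 91))
green      = (F(0, 91), F(60, 91))   # anywhere from 0 to 60 green

print(comparable(red, green))         # False: ambiguous
print(comparable(red_orange, green))  # False: ambiguous
print(comparable(red_orange, red))    # True: unambiguously more likely
```

Ambiguity (interval overlap) fails transitivity exactly as described: red and green overlap, (red or orange) and green overlap, yet (red or orange) and red are cleanly ordered.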

Now it is true that I will still make a choice in ambiguous situations, but this choice depends on a "state variable". In unambiguous situations the choice is "stateless".

I'm not really sure what a lot of this means

Sorry about that. Maybe I've been clearer this time around?

Comment author: Stuart_Armstrong 10 February 2012 02:17:01PM 0 points

How does this work, then? Can you justify that the bonus is free without circularity?

For two agents, I can.

Imagine a setup with two agents, otherwise identical, except that one owns a 1/2+-1/4 bet and the other owns 1/2. A government agency wishes to promote trade, and so will offer 0.1 to any agents that do trade (a one-off gift).

If the two agents are Bayesian, they will trade; if they are ambiguity averse, they won't. So the final setup is strictly identical to the start one (two identical agents, one owning 1/2+- 1/4, one owning 1/2) except that the Bayesian are each 0.1 richer.
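The subsidy arithmetic can be made explicit with a small sketch (mine, not from the comment), pricing the ambiguous bet at its midpoint for the Bayesian and at the interval endpoints for the ambiguity-averse buy/sell rule described elsewhere in the thread:

```python
SUBSIDY = 0.1
bayes_price = 0.5              # Bayesian value of both 1/2 and 1/2 +- 1/4
buy_at, sell_at = 0.25, 0.75   # ambiguity-averse prices for 1/2 +- 1/4

# Bayesians: the bets are equal in expectation, so swapping plus a
# 0.1 subsidy is a pure gain for both sides.
bayes_trades = (bayes_price + SUBSIDY) >= bayes_price

# Ambiguity-averse: the holder of 1/2 +- 1/4 wants at least 0.75 to
# part with it, but 0.5 + 0.1 = 0.6 is all the other side will give.
averse_trades = (0.5 + SUBSIDY) >= sell_at

print(bayes_trades, averse_trades)   # True False
```

As the next comment notes, this comparison only bites if the 0.1 really is "free", which is exactly the point under dispute.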

Comment author: fool 12 February 2012 05:29:16PM 1 point

Right, except this doesn't seem to have anything to do with ambiguity aversion.

Imagine that one agent owns $100 and the other owns a rock. A government agency wishes to promote trade, and so will offer $10 to any agents that do trade (a one-off gift). If the two agents believe that a rock is worth more than $90, they will trade; if they don't, they won't, etc etc

Comment author: Stuart_Armstrong 09 February 2012 12:32:36PM 0 points

Incidentally, if you determine the contents of the urn by flipping a coin for each of the 60 balls to determine whether it is green or blue, then this matters to the Bayesian too -- this gives you the binomial prior, whereas I think most Bayesians would want to use the uniform prior by default. Doesn't affect the first draw, but it would affect multiple draws.

But it still remains that in many circumstances (such as single draws in this setup), there exists information that a Bayesian will find useless and an ambiguity-averter will find valuable. If agents have the opportunity to sell this information, the Bayesian will get a free bonus.

From a more financial perspective, the ambiguity-averter gives up the opportunity to be a market-maker: a Bayesian can quote a price and be willing to either buy or sell at that price (plus a small fee), whereas the ambiguity-averter's required spread is pushed up by the ambiguity (so all other agents will shop with the Bayesian).

Also, the ambiguity-averter has to keep track of more connected trades than a Bayesian does. Yes, for shoes, whether other deals are offered becomes relevant; but trades that are truly independent of each other (in utility terms) can be treated so by a Bayesian but not by an ambiguity-averter.

Comment author: fool 10 February 2012 03:21:46AM 1 point

But it still remains that in many circumstances (such as single draws in this setup), there exists information that a Bayesian will find useless and an ambiguity-averter will find valuable. If agents have the opportunity to sell this information, the Bayesian will get a free bonus.

How does this work, then? Can you justify that the bonus is free without circularity?

From a more financial perspective, the ambiguity-averter gives up the opportunity to be a market-maker: a Bayesian can quote a price and be willing to either buy or sell at that price (plus a small fee), whereas the ambiguity-averter's required spread is pushed up by the ambiguity (so all other agents will shop with the Bayesian).

Sure. There may be circularity concerns here as well though. Also, if one expects there to be a market for something, that should be accounted for. In the extreme case, I have no inherent use for cash; my utility consists entirely in the expected market.

Also, the ambiguity-averter has to keep track of more connected trades than a Bayesian does. Yes, for shoes, whether other deals are offered becomes relevant; but trades that are truly independent of each other (in utility terms) can be treated so by a Bayesian but not by an ambiguity-averter.

I also gave the example of risk-aversion though. If trades pay in cash, risk-averse Bayesians can't totally separate them either. But generally I won't dispute that the ideal use of this method is more complex than ideal Bayesian reasoning.

Comment author: endoself 08 February 2012 05:28:14AM 0 points

My guess is that's a limitation of two dimensions -- it'll handle updating on draws from the urn but not "internals" like that. But I'm guessing. (1/2 +- 1/6) seems like a reasonable prior interval for a structureless event.

Would a single ball that is either green or blue work?

0 is the obvious value, but any initial value is still dynamically consistent.

I agree that your decision procedure is consistent, not susceptible to Dutch books, etc.

I don't think this method has any metaphysical consequences, so I should be able to adopt your stance on probability. I'd say (for the sake of argument) that the probability intervals are still about the mental states that I think you mean.

I don't think this is true. Whether or not you flip the coin, you have the same information about the number of green balls in the urn; so, while the total information is different, the part about the green balls is the same. In order to follow your decision algorithm while believing that probability is about incomplete information, you have to always use all your knowledge in decisions, even knowledge that, like the coin flip, is 'uncorrelated' with what you are betting on (if I can use that word for something that isn't being assigned a probability). This is consistent with the letter of what I wrote, but I think that a bet about whether a green ball will be drawn next should use your knowledge about the number of green balls in the urn, not your entire mental state.

Comment author: fool 10 February 2012 02:55:21AM 1 point

Would a single ball that is either green or blue work?

That still seems like a structureless event. No abstract example comes to mind, but there must be concrete cases where Bayesians disagree wildly about the prior probability of an event (<5% vs >95%). Some of these cases should be candidates for very high (but not complete) ambiguity.

I think that a bet that is about whether a green ball will be drawn next should use your knowledge about the number of green balls in the urn, not your entire mental state.

I think you're really saying two things: the correct decision is a function of (present, relevant) probability, and probability is in the mind. I'd say the former is your key proposition. It would be sufficient to rule out that an agent's internal variables, like the virtual interval, could have any effect. I'd say the metaphysical status of probability is a red herring (but I would also accept a 50:50 green-blue herring).

Of course even for Bayesians there are equiprobable options, so decisions can't be entirely a function of probability. More precisely, your key proposition is that probabilistic indecision has to be transitive. Ambiguity would be an example of intransitive indecision.

Comment author: endoself 06 February 2012 05:03:42AM 0 points

Huh, my explanations in that last post were really bad. I may have used a level of detail calibrated for simpler points, or I may have just not given enough thought to my level of detail in the first place.

you would think it's excessive to trade (20U,0U) for just 1U.

What bet did you have in mind that was worth (20U, 0U)? One of the simplest examples, if P(green) = 1/3 +- 1/9, would be 70U if green, -20U if not green. Does it still seem excessive to be neutral to that bet, and to trade it for a certain 1U (with the caveats mentioned)?
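The arithmetic behind that interval checks out (a quick verification of my own, not part of the exchange): evaluating the bet's expected utility at the two endpoints of P(green) = 1/3 +- 1/9 gives exactly 0U and 20U.

```python
from fractions import Fraction as F

def eu(p, win=70, lose=-20):
    """Expected utility of the bet at a given P(green)."""
    return p * win + (1 - p) * lose

lo = F(1, 3) - F(1, 9)   # 2/9
hi = F(1, 3) + F(1, 9)   # 4/9
print(eu(lo), eu(hi))    # 0 20
```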

What if I told you that the balls were either all green or all blue? Would you regard that as (20U, 0U) (that was basically the bet I was imagining but, on reflection, it is not obvious that you would assign it that expected utility)? Would you think it equivalent to the (20U, 0U) bet you mentioned and not preferable to 1U?

There are two standard Ellsberg-paradox urns, each paired with a coin. You are asked to pick one; you get a reward iff ((green and heads) or (blue and tails)). At first you are indifferent, as both are identical. However, before you make your selection, one of the coins is flipped. Are you still indifferent?

So looks like my options are:

A) choose urn 1 either way

B) choose urn 1 (i.e. green) if the coin comes up heads, choose urn 2 if the coin comes up tails

C) choose urn 2 if the coin comes up heads, choose urn 1 (i.e. blue) if the coin comes up tails

D) choose urn 2 either way

And to be pedantic: E) flip my own coin to randomise between options B and C.

I am indifferent between A, D, and E, which I prefer to B or C.

So in the standard Ellsberg paradox, you wouldn't act non-Bayesianly if you were told "The reason I'm asking you to choose between red and green rather than red and blue is because of a coin flip.", but you'd still prefer red if all three options were allowed? I guess that is at least consistent.

What if they were in the care of her future self who already flipped the coin? Why is this different?

This I don't understand. She is her future self isn't she?

This is getting at a similar idea as the last one. What seems like the same option, like green or Irina, becomes more valuable when there is an interval due to a random event, even though the random event has already occurred and the result is now known with certainty. This seems to be going against the whole idea of probability being about mental states; even though the uncertainty has been resolved, its status as 'random' still matters.

Comment author: fool 08 February 2012 02:53:23AM 1 point

What if I told you that the balls were either all green or all blue?

Hmm. Well, with the interval prior I had in mind (footnote 7), this would result in very high (but not complete) ambiguity. My guess is that's a limitation of two dimensions -- it'll handle updating on draws from the urn but not "internals" like that. But I'm guessing. (1/2 +- 1/6) seems like a reasonable prior interval for a structureless event.

So in the standard Ellsberg paradox, you wouldn't act non-Bayesianly if you were told "The reason I'm asking you to choose between red and green rather than red and blue is because of a coin flip."

If I take the statement at face value, sure.

but you'd still prefer red if all three options were allowed?

Yes, but again I could flip a coin to decide between green and blue then.

This seems to be going against the whole idea of probability being about mental states;

Well, okay. I don't think this method has any metaphysical consequences, so I should be able to adopt your stance on probability. I'd say (for the sake of argument) that the probability intervals are still about the mental states that I think you mean. However these mental states still leave the correct course of action underdetermined, and the virtual interval represents one degree of freedom. There is no rule for selecting the prior virtual interval. 0 is the obvious value, but any initial value is still dynamically consistent.

Comment author: Stuart_Armstrong 06 February 2012 01:15:14PM 0 points

Instead of using a prior probability for events, can we not use an interval of probabilities?

Intervals of probability seem to reduce to probability if you consider the origin of the interval. Suppose in the Ellsberg paradox that the proportion of blue and green balls was determined, initially, by a coin flip (or series of coin flips). In this view, there is no ambiguity at all, just classical probabilities - so you seem to posit some distinction based on how something was set up. Where do you draw the line; when does something become genuinely ambiguous?

The boots example and the mother example can both be dealt with using standard Bayesian techniques (you take utility over worlds, and worlds with one boot are not very valuable, worlds with two are; and the memories of the kids are relevant to their happiness), and you can re-express what is intuitively an "interval of probability" as a Bayesian behaviour over multiple, non-independent bets.

To be clear: you mean that my choices somehow cost utility, even if they're consistent?

You would pay to remove ambiguity. And ambiguity removal doesn't increase expected utility, so Bayesian agents would outperform you in situations where some agents had ambiguity-reducing knowledge.

Comment author: fool 08 February 2012 02:06:15AM 0 points

Suppose in the Ellsberg paradox that the proportion of blue and green balls was determined, initially, by a coin flip (or series of coin flips). In this view, there is no ambiguity at all, just classical probabilities

Correct.

Where do you draw the line

1) I have no reason to think A is more likely than B and I have no reason to think B is more likely than A

2) I have good reason to think A is as likely as B.

These are different of course. I argue the difference matters.

The boots example and the mother example can both be dealt with using standard Bayesian techniques

Correct. See last paragraph of the post.

You would pay to remove ambiguity. And ambiguity removal doesn't increase expected utility, so Bayesian agents would outperform you in situations where some agents had ambiguity-reducing knowledge.

If you mean something like: red has probability 1/3, and green has probability 1/3 "on average", then I dispute "on average" -- that is circular.

The advantage of a money pump or "Dutch book" argument is that you don't need such assumptions to show that the behaviour in question is suboptimal. (Un)fortunately there is a gap between Bayesian reasoning and what money pump arguments can justify.

(Incidentally, if you determine the contents of the urn by flipping a coin for each of the 60 balls to determine whether it is green or blue, then this matters to the Bayesian too -- this gives you the binomial prior, whereas I think most Bayesians would want to use the uniform prior by default. Doesn't affect the first draw, but it would affect multiple draws.)
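That parenthetical can be checked directly (my own sketch, not from the comment): put a prior on the number of green balls g out of 60, update on one green draw (with replacement), and compare the predictive probability of green on the next draw under the two priors.

```python
from math import comb
from fractions import Fraction as F

N = 60
uniform  = [F(1, N + 1)] * (N + 1)                    # all counts equally likely
binomial = [F(comb(N, g), 2**N) for g in range(N + 1)]  # coin flip per ball

def p_green_after_green(prior):
    """P(second draw green | first draw green), with replacement."""
    num = sum(p * F(g, N) * F(g, N) for g, p in enumerate(prior))
    den = sum(p * F(g, N) for g, p in enumerate(prior))
    return num / den

print(p_green_after_green(uniform))   # 121/180, about 0.672
print(p_green_after_green(binomial))  # 61/120, about 0.508
```

Both priors give 1/2 for the first draw, but the uniform prior's fatter tails make a second green much more credible after one is seen, just as the comment says.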

Comment author: Stuart_Armstrong 06 February 2012 12:56:59PM 0 points

Instead we have a spread: we buy bets at their low price and sell at their high price.

How does the central limit theorem apply here? If you have a lot of independent deals of the form 1/2 +- 1/4, say, then when confronted with a million such bets, their average value is very close to 1/2. I don't see how your model deals with these kinds of situations - randomising between +- 1/4 and -+ 1/4, maybe? Seems clumsy.

Comment author: fool 08 February 2012 02:02:04AM 0 points

If you mean repeated draws from the same urn, then they'd all have the same orientation. If you mean draws from different unrelated urns, then you'd need to add dimensions. It wouldn't converge the way I think you're suggesting.

Comment author: torekp 03 February 2012 02:10:15AM 0 points

This oriented-probability-interval stuff does seem to perform as advertised. But I just want to point to another, in my opinion simpler, way to rationally refuse to play Savage-style expected utility games. The simple way contests the axioms dealing with preference, rather than those dealing with probability. If some options are incomparable, Savage's argument fails. (Of course if an agent is forced to choose between incomparable options, it will choose, but that doesn't mean it considers one of the options "better" in a straightforward way, nor that a classical utility function can be derived.)

What if Principle of Indifference-inspired probability theories can actually be made to work? Would you end your defiance of classical utility theory?

Upvoted, first and foremost for appendix B.

Comment author: fool 04 February 2012 07:04:07PM 1 point

Here's an alternate interpretation of this method:

If two events have probability intervals that don't overlap, or they overlap but they have the same orientation and neither contains the other, then I'll say that one event is unambiguously more likely than the other. If two events have the exact same probability intervals (including orientation), then I'll say they are equally likely. Otherwise they are incomparable.

Under this interpretation, I claim that I do obey rule 2 (see prev post): if A is unambiguously more likely than B, then (A but not B) is unambiguously more likely than (B but not A), and conversely. I still obey rule 3: (A or B) is either unambiguously more likely than A, or they are equally likely. I also claim I still obey rule 4. Finally, I claim "unambiguously more likely" is transitive, but it is not total: there are incomparable events. So I break that part of rule 1.

Passing to utility, I'll also have "unambiguously better", "equally good", and "incomparable".

Of course if an agent is forced to choose between incomparable options, it will choose, but that doesn't mean it considers one of the options "better" in a straightforward way,

Exactly. But there's a major catch: unlike with equal choices, an agent cannot choose arbitrarily between incomparable choices. This is because incomparability is intransitive. If the agent doesn't resolve ambiguities coherently, it can get money pumped. For instance, an 18U bet on green and a 15U bet on red are incomparable. Say it picks red. A 15U bet on green and a 15U bet on red are also incomparable. Say it picks green. But then, an 18U bet on green is unambiguously better than a 15U bet on green.
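The intervals behind that pump can be verified with a small sketch (mine, using the standard 30-red / 60-green-or-blue Ellsberg urn and ignoring orientation): strict containment of one value interval in another marks a pair incomparable.

```python
from fractions import Fraction as F

P_GREEN = (F(0), F(2, 3))    # 0 to 60 of 90 balls
P_RED   = (F(1, 3), F(1, 3))  # exactly 30 of 90 balls

def value(interval, stake):
    """Range of expected utilities for a bet paying `stake` on the event."""
    return (interval[0] * stake, interval[1] * stake)

g18 = value(P_GREEN, 18)   # (0, 12)
g15 = value(P_GREEN, 15)   # (0, 10)
r15 = value(P_RED, 15)     # (5, 5)

def incomparable(a, b):
    # one interval strictly inside the other: neither is clearly better
    return (a[0] < b[0] and b[1] < a[1]) or (b[0] < a[0] and a[1] < b[1])

print(incomparable(g18, r15))  # True
print(incomparable(g15, r15))  # True
print(incomparable(g18, g15))  # False: 18U on green dominates 15U on green
```

Picking red over 18U-green and then green over 15U-red is therefore a strict loss: the agent ends holding 15U on green when it started with 18U on the same event.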

The rest of the post is then about one method to resolve incomparability coherently.

I personally think this interpretation is more natural. I also think it will be even less palatable to most LW readers.
