In "Principles of Disagreement," Eliezer Yudkowsky shared the following anecdote:

Nick Bostrom and I once took a taxi and split the fare.   When we counted the money we'd assembled to pay the driver, we found an extra twenty there.

"I'm pretty sure this twenty isn't mine," said Nick.

"I'd have been sure that it wasn't mine either," I said.

"You just take it," said Nick.

"No, you just take it," I said.

We looked at each other, and we knew what we had to do.

"To the best of your ability to say at this point, what would have been your initial probability that the bill was yours?" I said.

"Fifteen percent," said Nick.

"I would have said twenty percent," I said.

I have left off the ending to give everyone a chance to think about this problem for themselves. How would you have split the twenty? 

In general, EY and NB disagree about who deserves the twenty. EY believes that EY deserves it with probability p, while NB believes that EY deserves it with probability q. They decide to give EY a fraction of the twenty equal to f(p,q). What should the function f be?

In our example, p=1/5 and q=17/20

Please think about this problem a little before reading on, so that we do not miss out on any original solutions that you might have come up with.


I can think of 4 ways to solve this problem. I am attributing answers to the person who first proposed that dollar amount, but my reasoning might not reflect their reasoning.

  1. f=p/(1+p-q) or $11.43 (Eliezer Yudkowsky/Nick Bostrom) -- EY believes he deserves p of the money, while NB believes he deserves 1-q. They should therefore be given money in a ratio of p:1-q.
  2. f=(p+q)/2 or $10.50 (Marcello) -- It seems reasonable to assume that there is a 50% chance that EY reasoned properly and a 50% chance that NB reasoned properly, so we should take the average of the amounts of money that EY would get under these two assumptions.
  3. f=sqrt(pq)/(sqrt(pq)+sqrt((1-p)(1-q))) or $10.87 (GreedyAlgorithm) -- We want to choose an f so that log(f/(1-f)) is the average of log(p/(1-p)) and log(q/(1-q)).
  4. f=pq/(pq+(1-p)(1-q)) or $11.72 -- We have two observations that EY deserves the money with probability p and probability q respectively. If we assume that these are two independent pieces of evidence as to whether or not EY should get the money, then starting with equal likelihood of each person deserving the money, we should do a Bayesian update for each piece of information.
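
For concreteness, here is a minimal Python sketch evaluating all four rules at the example values p = 1/5 and q = 17/20:

```python
# Minimal sketch: evaluate the four proposed splitting rules on the
# example values from the post (p = 1/5, q = 17/20, $20 at stake).
from math import sqrt

p, q, total = 0.20, 0.85, 20.00

rules = {
    "1. p/(1+p-q)":          p / (1 + p - q),
    "2. (p+q)/2":            (p + q) / 2,
    "3. log-odds average":   sqrt(p*q) / (sqrt(p*q) + sqrt((1-p)*(1-q))),
    "4. Bayes, independent": p*q / (p*q + (1-p)*(1-q)),
}

for name, f in rules.items():
    print(f"{name}: f = {f:.4f}, Eliezer gets ${f*total:.2f}")
# Expected output: $11.43, $10.50, $10.87, $11.72 respectively.
```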

I am very curious about this question, so if you have any opinions, please comment. I have some opinions on this problem, but to avoid biasing anyone, I will save them for the comments. I am actually more interested in the following question; I believe the two questions will have the same answer, but if anyone disagrees, let me know.

I have two hypotheses, A and B. I assign probability p to A and probability q to B. I later find out that A and B are equivalent. I then update to assign the probability g(p,q) to both hypotheses. What should the function g be?

Comments
gjm

[EDITED substantially from initial state to fix some serious errors.]

If we try to do this properly with Bayes, here's how it goes. Your odds ratio Pr(N) : Pr(E) presumably starts at 1:1 since the initial situation is symmetrical. Then it needs to be multiplied by Pr(N says 15% | N) : Pr(N says 15% | E) and then by Pr(E says 20% | N) : Pr(E says 20% | E).

Continuing to assume that we have no outside knowledge that would distinguish N from E (in reality we might; they're quite different people; this might also change our prior odds ratio), we'd better have a single function giving Pr(say p | was yours) : Pr(say p | not yours), and then our posterior odds ratio is f(15%) / f(20%).

(And then Pr(E) is f(20%) / (f(15%) + f(20%)) and Eliezer should get that fraction of the $20.)

I don't see any strong grounds for singling out one choice of f as The Right One. At least for comparably "round" choices of p, f should be increasing. It shouldn't go all the way to 0:1 at p=0 or 1:0 at p=1, but for people as clueful about probabilities as E and N it should be pretty close at those extremes.

But, still, where does f come from? Presumably you have some idea of how carefully you counted your money (and some idea of how the other guy counted, which complicates matters a bit more), and the more carefully you counted (and weren't aware that the $20 was yours) the more likely you are to give a low value of p, and also the more likely it is that the money really isn't yours.

Toy model #1: there's a continuum of counting procedures, each of which puts in an extra $20 with probability q for some 0 <= q <= 1, and you know in hindsight which you employed, and you give that value of q as your estimate. But that isn't actually the right value of q; the right value depends on what the other person is expected to have done. We should think better of both N and E than to model them this way.

If you do this, though, what happens is that if the prior probability of error for each party is g (for "goof") then f(q) = q(1-g) : (1-q)g, and then f(q1) / f(q2) = q1(1-q2) : (1-q1)q2. This is the same as option 4 in the article.

Toy model #2: there's a continuum of counting procedures, as before, and which you use is chosen according to some pdf, the same pdf for both parties, and you know which procedure you used and what the pdf is. Then your Pr($20 is mine | q) is Pr(I goofed and the other guy didn't | q, exactly one $20 extra) = q Pr(other guy didn't goof) / (q Pr(other guy didn't goof) + (1-q) Pr(other guy goofed)) so if the overall goof probability is g (this is all you need to know; the whole pdf is unnecessary) then for a given value of q, the probability you quote will be q(1-g) / [q(1-g)+(1-q)g]. Which means (scribble, scribble) that if you quote a probability p then your q is (necessarily, exactly) gp / ((1-g)-(1-2g)p). Which means, unless I'm confused which I might be because it's after 1am local time and I know I already made one mistake in the first version of this comment, that when you say p1 and he says p2 the odds ratio is gp1 / ((1-g)-(1-2g)p1) : gp2 / ((1-g)-(1-2g)p2) = p1 ((1-g)-(1-2g)p2) : p2 ((1-g)-(1-2g)p1).

This depends on g, which is an unknown parameter. When g->0 we recover the same answer as in toy model #1. (When errors are very rare, the probability that the $20 is yours is basically the same as the probability that you goofed.) When g->1 the posterior odds approach 1:1. (When non-errors are very rare, Pr(N goofed) and Pr(E goofed) are both very close to 1, so their ratio is very close to 1, so once we know there was exactly one error it's about equally likely to be either's.) As g increases, the odds ratio becomes monotonically less unequal.

I don't see any obvious principled way of estimating g, beyond the trivial observations that it shouldn't be too small since an error happened in this case and it can't be too large since N and E were both surprised by it.

If, like Coscott, you feel that when p2=1-p1 (i.e., the two parties agree on the probability that the money belongs to either of them) the posterior odds should be the same as you'd get from either (i.e., you should divide the money in the "obvious" proportions), then this is achieved for g=1/2 and for no other choice of g. For g=1/2, the posterior odds are simply p1 : p2. That's the original Yudkowsky solution, #1 in the article, with which shminux agrees in comments here.

If, IIRC like someone in the original discussion, you feel that replacing p1,p2 with 1-p1,1-p2 should have the same effect as replacing them with p2,p1 -- i.e., if one party says "20% N" and the other says "15% E" it doesn't matter which is which -- then you must take either g=0 (reproducing answer 4) or g=1 (always splitting equally).

What's wrong with toy model #2, aside from that annoying free parameter? A few things. Firstly, in reality people tend to be overconfident. (Maybe not smart bias-aware probability-fluent people talking explicitly about probabilities, but I wouldn't bet on it.) This amounts to saying that we should move q1 and q2 towards 50% somewhat before doing the calculation, which will make the posterior odds less unequal. Exactly how much is anyone's guess; it depends on our guess of the calibration curves for people like N and E in situations like this.

Secondly, you won't really know your own procedure's q exactly. You'll have some uncertain estimate of it. If my scribblings are right then symmetrical uncertainty in q doesn't actually change the posterior odds. Your uncertainty about q won't really be symmetrical, even after accounting for miscalibration -- e.g., if you think q=0.01 then you might be 0.02 too low but not 0.02 too high -- but for moderate values like 0.15 or 0.20 symmetry's probably a harmless assumption.

Thirdly, whatever your "internal" estimate of q it'll get rounded somewhat, hence the nice round 15% and 20% figures N and E gave. This is probably also a fairly symmetrical affair for probabilities in this range, so again it probably doesn't make much difference to the posterior odds.

On the basis of all of which, I'll say:

  • Answer #4 in the article seems like an upper bound on how unequally the money can reasonably be distributed. If you either adopt the overoptimistic model 1, or take g to be tiny in model 2, and don't allow for any overconfidence, then you get this answer.
  • There doesn't seem to be any very obvious way to choose that free parameter g.
    • If you take the two parties' estimates of the probability that they goofed as indicative and take g=(p1+p2)/2, which really isn't a very principled choice but never mind, then in this case you get a division $20 = $8.35 + $11.65, just barely less unequal than the g=0 solution.
    • If you take g=0 you get answer #4 again. This is your only choice if you want only the two probability assignments to matter, and not who offers which.
    • If you take g=1/2 you get answer #1. This is your only choice if you want to divide the money in the ratio p:1-p when both parties agree that Pr(money is N's) = p.
    • If you take g=1 you get equal division regardless of quoted probabilities.
  • Marcello's answer (#2 in the article) is always less unequal than #4 and is nice and easy to calculate. It might be a good practical compromise for those not quite pragmatic enough to adopt the obvious "meh" solution I proposed elsewhere in comments.
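
To see how the split moves with the free parameter, here is a minimal Python sketch evaluating the toy-model-#2 formula at a few values of g (using p1 = 0.15 for N and p2 = 0.20 for E):

```python
# Sketch: evaluate the toy-model-#2 odds ratio at several values of the free
# parameter g, with p1 = 0.15 (Nick's quoted probability) and p2 = 0.20 (Eliezer's).
def eliezer_share(p1, p2, g, total=20.0):
    n = p1 * ((1 - g) - (1 - 2 * g) * p2)   # odds weight for "the $20 is Nick's"
    e = p2 * ((1 - g) - (1 - 2 * g) * p1)   # odds weight for "the $20 is Eliezer's"
    return total * e / (n + e)

for g in (0.0, 0.175, 0.5, 1.0):
    print(f"g = {g}: Eliezer gets ${eliezer_share(0.15, 0.20, g):.2f}")
# g = 0     -> $11.72 (answer #4)      g = 0.175 -> $11.65
# g = 0.5   -> $11.43 (answer #1)      g = 1     -> $10.00 (equal split)
```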

Then it needs to be multiplied by Pr(N says 15% | N) : Pr(N says 20% | E) and then by Pr(E says 20% | N) | Pr(E says 20% | E).

Was that second Pr() meant to be "Pr(N says 15% | E)"?

gjm

Yup. Will fix. Thanks! [EDITED: now fixed; thanks again.]

It shouldn't go all the way to 0:1 at p=0 or 1:0 at p=1

I am surprised you feel this way. If EY says, "Oh, I put that extra 20 in there," and NB says he has no idea, then I think EY should get the 20 back.

gjm

Pragmatically yes, but if E is foolish enough to say it's his with probability 1 there's really still some chance that actually it's N's. (Suppose E says "yeah, I put that in" and N replies "Huh? I was 99% sure I put in an extra twenty".)

E does also happen to be the author of this post. :-)

gjm

Quite so. (I wondered about linking to it from the word "foolish" but decided it wasn't necessary.)

I wish to disclaim my prior reply as long-ago and made-up-on-the-spot.

I wish to disclaim my prior reply as long-ago and made-up-on-the-spot.

Do you have a new answer?

[anonymous]

When I got to the end of the anecdote, I immediately assumed that Eliezer ended up taking the entire twenty dollars; it wasn't until I read a little further that I realized we were discussing other options. I can justify this split by saying that since Eliezer appears to be more deserving of the twenty dollars, there's no reason for Nick to receive a penny of it.

Another possible line of reasoning: there's an extra twenty dollars there, and two people whom it would be reasonable to give it to, each of them perfectly willing to let the other have it, and so the money should go to whoever wants it more (or perhaps whoever's less well off to begin with, or whoever's younger).

And another: Eliezer is more deserving of it, and so we should prefer to give it to Eliezer, but Nick's expected amount-of-money-that-was-his is 0.15 * $20, so we should give Eliezer as much as possible while still giving Nick at least that amount, i.e. Eliezer should get 85% and Nick should get 15%.

Or just give it to the driver.

I don't see any reason that the probability-assigning function g would be identical to the money-assigning function f, because the amount of money Eliezer should get is not necessarily proportional to the probability that the twenty originally belonged to Eliezer. If we pretend that the algorithms given in the article are probability-assigning algorithms, then 4 looks the most compelling, 1 also looks reasonable, I don't really understand 3, and 2 looks problematic (if Nick had said 15% and Eliezer had said 0%, then the correct adjustment seems to be to say that it's Nick's, with 100% probability).

Or just give it to the driver.

Usually I'd say just donate it to charity. But which one?

Giving the extra to the driver doesn't work if p>q, because the sum of the values each person believes they are entitled to will be more than $20.00.

[anonymous]

Unless the driver is very nice.

if Nick had said 15% and Eliezer had said 0%

Then either the money was Nick's or Eliezer was lying. And if either of them was lying, no juggling with the numbers they said would make much sense.

[This comment is no longer endorsed by its author]
[anonymous]

Then either the money was Nick's or Eliezer was lying.

Yes, I'm saying that the money is Nick's in that case.

I misremembered which one 2 was (I thought it was the one that's actually 4), and I thought you were talking about something it did but you thought was wrong to do.

(WTH is happening to my reading comprehension skills lately?)

gjm

My preferred solution: "Meh, 15% and 20% are pretty close. Call it $10 each?" "OK." Done.

While this is what would normally happen in this situation, I don't think that answers or takes away from the deeper question of what is the most fair way to distribute the money.

gjm

Of course I agree.

Shmi

I interpreted the story as EY says "20% odds it's mine" and NB says "15% odds it's mine", so the first approximation is to renormalize the odds to add to 100% and split the bill as 20/(20+15) and 15/(20+15) respectively. Anything more involved requires extra information and progressively involved assumptions. For example, what can you conclude about the calibration level of each one? Did NB actually mean 1/7 when he said 15%? Is EY prone to over/under-estimating the reliability of his memory? And practical questions: how cumbersome would it be to split the $20 given the change available?

So the real exchange probably went something like this: "Here, I'll take the twenty and give you a ten and a dollar bill... Oh, here is a couple of quarters, too". "Keep the quarters, I hate coins." "Done."

Your concluding question does not seem to be relevant, and the calculation depends heavily on how you assigned p and q to begin with. Was there shared evidence? What other alternatives were considered? I can easily imagine that in some circumstances the combined probability could be lower than both p and q, because both heavily rely on the same piece of evidence and not seeing that the two hypotheses were equivalent weakens the value of this evidence (what else did you miss?).

I fully agree that you could have more information that tells you how to combine the probabilities, but we don't always have that information, and we need to make a decision anyway. Maybe this means the problem does not have a definitive answer, but I am still trying to decide what I would do.

[anonymous]

If I'm the person who came up with probabilities p and q in the first place, surely I know how I came up with those probabilities.

This is valid. However, for the reasons I am interested in the problem (which I am not going to describe now), I don't get to use anything besides those probabilities. Pretend that I am stubborn, and refuse to consider anything other than p and q, but still would like advice on how to combine them.

I'm surprised nobody has yet written that the appropriate way for them to split it in this case is $10 each, because the transaction cost of working out something else in more detail and then making the appropriate change is greater than the difference between $10 and whatever the appropriate answer is.

Yes, but here the goal is to solve the general case.

I suspect that the problem of trusting system 1 is more general than the problem of perfectly analyzing system 2 (as a citation: the fact that humans use system 1 reasoning almost all the time).

I agree that the system 2 answer to this question is also interesting, and my first answer was the bayesian answer which I believe was 3rd on the OP.

I stand by the fact that the real world answer to THIS problem is decided by contingent environmental circumstances, and that the real answer to any similar but scaled-up real world problem will also probably be decided by contingent environmental circumstances. I don't resent people answering in a technical way; I was more just surprised that no one else had written what I wrote.

Another relevant figure is EY's estimate that the bill was Nick's, and vice versa. We do not yet have enough data to solve the problem. If EY said that he also had a 20% probability that the bill was Nick's and Nick had a 15% probability that the bill was EY's, they should split it 50/50.

Also, if Nick had said it was 1% chance it was his and EY had said it was 3% it was his (they had both checked their wallets earlier, say), would it make sense to split it 3:1 in favor of EY, or anything close to that? I figure that in that case the probability that it was already in the cab and they should split it 50/50 as a windfall begins to dominate. Such a notion does not seem to have entered any of the above calculations.

As promised, here is my analysis.

First of all, I think that the g and f functions should be the same. My reason is that to be completely satisfied with who gets the 20, both people should update their probabilities to the same value, so f(p,q) seems like the probability that EY and NB should both walk away assigning to EY deserving the 20. Since EY and NB trust each other, this should be the same as if a single person held both beliefs and then learned they were equivalent. Some of the features I want in f are easier to justify in the g example, which might have caused some of my mistakes.

I think this question is inherently about how much we treat p and q as coming from independent sources of information. If we say that the sources are independent, then #4 is the only reasonable answer. However, the dependency of the evidence is not known.

I think f should have the following properties:

A: f(p,p)=p -- If we wanted f to be a general rule for how to take two probabilities and output a probability that is an agreement of the two, then there is a danger in setting g(p,p) to anything other than p, in that we could repeatedly apply our rule and get different answers. We update p and q to g(p,q) and g(p,q), but then these are our new probabilities, so we will update them both to g(g(p,q),g(p,q)); unless g(p,p)=p, our answer is not consistent under reflection. Therefore, I think g(p,p) should be p, and this makes me believe that f(p,p) should also be p. (Maybe I am working by analogy, but I still believe this.)

B: f(p,q)=1-f(1-q,1-p) -- This is just saying that the answer is symmetric with respect to swapping EY and NB.

C: f(p,q)=f(q,p) -- This is saying that the answer is symmetric with respect to swapping who says what. This property seems really necessary in the g problem.

D: f(1,q)=1 -- If EY knows he put the 20 in, he should get it back. In the g problem, if A is a theorem, and we learn B if and only if A, then we can prove B also.

E: If p1>=p2, then f(p1,q)>=f(p2,q) -- If EY is more sure he should get the money, he shouldn't get less money.

F: f(f(p,q),f(r,s))=f(f(p,r),f(q,s)) -- This relation doesn't mean much for f, but for g, it is saying that the order in which we learn conjectures are equivalent shouldn't change the final answer.

G: f is continuous -- A small change in probability shouldn't have a huge effect on the decision.

I think that most people would agree with B, C, and E, but looking at the comments, A and D are more controversial. F doesn't make any sense unless you believe that f=g. I am not sure how people will feel about G. Notice that B, C, and D together imply that f(1,0) is not defined, because it would have to be both 1 and 0; I think this is okay. You can hardly expect EY and NB to continue trusting each other after something like this, but it needs to be said to be mathematically correct.

Now, to critique the proposed solutions.

1 Violates C and D (I didn't check F)

2 Violates D

4 Violates A

3 Does not violate any of my features.
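
Here is a minimal numerical spot-check of these claims in Python (one example point per property, so it illustrates the violations rather than proving the non-violations):

```python
# Sketch: numerically spot-check properties A-D for the four candidate rules.
from math import sqrt, isclose

def f1(p, q): return p / (1 + p - q)
def f2(p, q): return (p + q) / 2
def f3(p, q):
    a, b = sqrt(p * q), sqrt((1 - p) * (1 - q))
    return a / (a + b)
def f4(p, q): return p * q / (p * q + (1 - p) * (1 - q))

tests = {
    "A: f(p,p) = p":            lambda f: isclose(f(0.3, 0.3), 0.3),
    "B: f(p,q) = 1-f(1-q,1-p)": lambda f: isclose(f(0.2, 0.85), 1 - f(0.15, 0.8)),
    "C: f(p,q) = f(q,p)":       lambda f: isclose(f(0.2, 0.85), f(0.85, 0.2)),
    "D: f(1,q) = 1":            lambda f: isclose(f(1.0, 0.5), 1.0),
}

for name, f in [("1", f1), ("2", f2), ("3", f3), ("4", f4)]:
    fails = [t for t, check in tests.items() if not check(f)]
    print(f"rule {name} fails:", fails or "none")
# rule 1 fails C and D, rule 2 fails D, rule 3 fails none, rule 4 fails A.
```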

I did not do a good job of describing where #3 came from, so let me do better. #3 chooses the unique value f such that updating EY's probability to f and updating NB's probability to f would take the same amount of evidence. It satisfies A because if they are both already the same, then it doesn't take any evidence. It satisfies D, because no finite amount of evidence can bring you back from certainty.

We do not have enough information to say that #3 is the unique solution. If we were to try to, it would look roughly like this:

If we think about the problem by looking at p* = log(p/(1-p)), then #3 just finds the f* = (p* + q*)/2. E and A together tell us that f should lie somewhere between p and q, but it is not immediately clear that the arithmetic mean in this transformed scale is the best compromise. However, I believe this implies that we can apply some monotone function h such that h(f*) is always the arithmetic mean of h(p*) and h(q*). B tells us that this monotone function must be an odd function (h(-x)=-h(x)).

From here, #3 assumes that h(x)=x, but if we were to take h(x)=x^3, for example, we would still meet all of our properties. We have this freedom because we can weight probabilities at different distances from 1/2 differently.
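
A small sketch confirming that averaging in log-odds space reproduces formula #3 on the example values:

```python
# Sketch: rule #3 as "average in log-odds space", with p = 0.2 and q = 0.85.
from math import log, exp, sqrt

def logit(x): return log(x / (1 - x))
def expit(x): return 1 / (1 + exp(-x))

p, q = 0.20, 0.85
f_logodds = expit((logit(p) + logit(q)) / 2)              # midpoint of p*, q*
f_formula = sqrt(p*q) / (sqrt(p*q) + sqrt((1-p)*(1-q)))   # closed form from the post
print(f_logodds, f_formula)   # both ~0.5434, i.e. $10.87 of the $20
```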

Now that I understand 4 (thanks again for the explanation!), this seems to be the key:

I think this question is inherently about how much we treat p and q as coming from independent sources of information. If we say that the sources are independent, then #4 is the only reasonable answer. However, the dependency of the evidence is not known.

I'm not sure that it makes sense for the general rule to do anything other than make an estimate of the amount of dependence in the evidence and update accordingly. You would of course need some kind of prior for that, but a Bayesian already needs a prior for everything anyway. Does that approach seem problematic for reasons I'm not thinking of?

Because you have two people, they might disagree on how independent their sources are, and further disagree on how independent their sources which told them their sources were independent are. This infinite regress must be stopped at some point, since they don't have infinite time to compare all of their notes, and when it stops, the original question must be answered.

For me though, I understand that we can't do Bayesian updates completely rigorously unless we account for all the information, but since we are not perfect Bayesianists, I think the question of how well we can do on a first approximation is an important one.

How about if the two do probability updates, Aumann agreement style, until their estimates agree? (Maybe this is equivalent to your method 3; I don't recall how the math works out.)

I think to apply Aumann, you have to assume that both people have consistent probability distributions, which I think is an unreasonable assumption. People are not perfect Bayesianists.

[anonymous]

The results of Aumanning can't be determined from just the initial probabilities. For example, suppose that Nick knows for a fact that an undetectable ninja rolled a 20-sided die and gave the twenty to Nick with probability 15%, and Nick also knows that neither Nick nor Eliezer has made any observations that would help them determine which person the ninja gave the money to. Eliezer, on the other hand, just made a wild guess. Nick will keep saying 15% no matter what, so their estimates can't converge to anything other than 15%.

So Aumanning can't be equivalent to any choice of either f or g.

EY believes he deserves the money with probability p.

NB believes he deserves the money with probability q.

The following rules would, I think, be considered fair by most people:

  • If p=q, f(p,q) should be 1/2.

  • If p=0, f(p,q) should be 0 (EY gets nothing).

  • If q=0, f(p,q) should be 1 (EY gets everything).

The simplest rational function obeying these conditions is f(p,q) = p/(p+q)

EY believes that EY deserves it with probability p, while NB believes that EY deserves it with probability q.

You are using the same notation the OP used to mean something different!

Really?

EY believes that EY deserves it with probability p, while NB believes that EY deserves it with probability q

Oh! You're right. The OP's q is 1-(my q), so this solution reduces to option (1) in the OP.

For 4 could you explain where that formula comes from? I thought I understood how to apply Bayes' theorem, but I'm getting stuck on what P(B) or P(B|A) should be (where A is: $20 is Eliezer's, and B is: Nick says 85%).

Sure! You start with a likelihood ratio of 1:1 of the 20 belonging to EY or NB. EY says that it is his with probability 20%. This is evidence with a ratio of 20:80, or 1:4. After you update to account for EY's claim, you assign a 20% chance that it belongs to EY. Then you have NB's claim, which is evidence in the ratio of 85:15, or 17:3. You multiply the 1 by 17 and the 4 by 3 to get 17:12. This means that EY should get 17/29 of the money, which is $11.72.

This is using the notation of likelihood ratios, which is easier to work with. Trying to attack it with Bayes' Theorem directly is more confusing. The reason is that we are trusting EY's and NB's claims as evidence without actually specifying some event that caused that evidence. A good way to think about it is that we start by taking EY into account and thinking that the probability of A is 20%. Then we say that NB has a sensor that tells him whether or not the 20 is his and gets it right 85% of the time. We let B be the event that NB's sensor tells him that the 20 belongs to EY. Then P(B|A)=85%, and P(B)=85% of 20% plus 15% of 80%, which is 29%. Therefore, we get P(A|B)=P(A)P(B|A)/P(B) = 20% × 85% / 29% = 17/29.

Does that make sense?

Yes, that does make sense. The problem was that I was thinking of B as the event that Nick says a certain percentage, rather than the event that he says the bill is Eliezer's with the percentage being the probability that he's right. Thanks!


Step 1: Before anything else, let's get the normative question out of the way. What exactly is the goal here... i.e., what are we maximizing for?

I think you can ask the same question about any fair division question, and I really don't know how to answer that. Perhaps the answer is that we are optimizing for how fair our morality says the division is.

our morality

If you are asking humanity, give the money to charity.

If I was one of the two people involved, I'd suggest using the 20 dollars to buy us both dinner (or other jointly enjoyed purchase...charity still works). The social trick leaves neither indebted to the other - in fact it can strengthen the bond.

You can't come up with a general solution to division problems, because human morality doesn't work that way. We've got really idiosyncratic notions of fairness, especially in cultures which have property rights.

"This bill clearly belongs to exactly one of the two us; I think it is 80% likely to be yours and 20% likely to be mine, and you think it is 85% likely to be mine and 15% likely to be yours. In order to fairly divide it, we weigh our beliefs equally (by symmetry), and divide it according to the ratio 80+15:20+85 you:me; that's 95:105, or 47.5:52.5. That's a 52.5% chance that I get the bill, and a 47.5% chance that you get it." (Same expected values as 2, from equivalent math)

I don't think there's a general case for the function g. Consider the case where hypotheses B, C, and D are mutually exclusive. Proposition A is equivalent to "not C" but B is only known to be mutually exclusive with C. On discovering that D is false, A becomes equivalent to B, but there is also potentially new information about C.

Variant 1: In a departure from Monty Hall, I put a bean under one of the cups B, C, D after rolling a d10. (Time a) Then I show you what is under cup D. There is not a bean. (Time b)

If the odds ratio at time a is 2:4:4, then the odds at time b are 2:4:0. Proposition A went from 6/10 likely to 1/3 likely, while B went from 2/10 to 1/3.

Variant 2: I roll the die; if the result is prime, I put the bean under cup d. Otherwise, if the result is greater than five, I put the bean under cup c. Otherwise I put the bean under cup b. At time a, the odds are 2:4:4. Then I tell you that the number I rolled is a perfect square. You should update to 2:1:0

In both cases, you updated that B and not-C were equivalent, but in the latter case you gained new information about B and C. In the general case, if there is no new information about B and C, then the ratio B:C should remain constant; I intuit that that will probably mean a fairly simple transformation.
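
A short enumeration of both variants (for variant 1 this assumes the same 2:4:4 assignment of rolls to cups as in variant 2, which is not specified above):

```python
# Sketch: enumerate the d10 rolls in the two cup variants described above.
# (For variant 1 this assumes the same 2:4:4 roll-to-cup assignment as variant 2.)
def cup(roll):
    if roll in (2, 3, 5, 7):   # prime -> cup d
        return "d"
    if roll > 5:               # non-prime and greater than five -> cup c
        return "c"
    return "b"                 # otherwise -> cup b

def posterior_counts(keep):
    counts = {"b": 0, "c": 0, "d": 0}
    for roll in range(1, 11):
        if keep(roll):
            counts[cup(roll)] += 1
    return counts

print(posterior_counts(lambda r: cup(r) != "d"))   # variant 1: cup d shown empty -> {'b': 2, 'c': 4, 'd': 0}
print(posterior_counts(lambda r: r in (1, 4, 9)))  # variant 2: roll is a perfect square -> {'b': 2, 'c': 1, 'd': 0}
```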

My intuition is #1, but a variant on #2 would be to use the geometric mean instead - so f = sqrt(pq). This has the desirable (but common) features that:

p=q => f=p

But unfortunately

p = 1, q = 0 => f=0

The geometric mean gives 0 an advantage over 1, which makes it not symmetric in general. If we trust EY and NB the same, then f(p,1-q) should equal 1-f(q,1-p) (in both cases, one person thinks they get p of the money and the other thinks they get q). Your variant of 2 does not satisfy this, which is the feature I think is the most important.
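
A quick numerical check of that symmetry property on the example values, comparing the plain geometric mean against formula #3:

```python
# Sketch: check the swap-symmetry f(p, 1-q) == 1 - f(q, 1-p) at the example
# values p = 0.2, q = 0.15, comparing the plain geometric mean with rule #3.
from math import sqrt, isclose

def f_geom(p, q):
    return sqrt(p * q)                      # the proposed variant on #2

def f_rule3(p, q):
    a, b = sqrt(p * q), sqrt((1 - p) * (1 - q))
    return a / (a + b)                      # formula #3 from the post

p, q = 0.20, 0.15
for name, f in [("sqrt(pq)", f_geom), ("rule #3", f_rule3)]:
    print(name, isclose(f(p, 1 - q), 1 - f(q, 1 - p)))
# prints: sqrt(pq) False, rule #3 True
```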

[anonymous]

Are either of them well calibrated enough for the difference between 15% and 20% to be meaningful?

[This comment is no longer endorsed by its author]

I'm GreedyAlgorithm, and here is some more discussion: https://rhollerith3.jottit.com/

Hal Finney's answer here which Richard Hollerith seems to endorse is #4.

Reading that, I believe his reason for endorsing #4 is that he made the assumption that neither of their claims relies on any evidence relied on by the other. If we assume this, I agree that #4 is the right answer, but I do not think that is a valid assumption.

Reposting:

I got the same answer as Marcello by assuming that each of them should get the same expected utility out of the split.

Say that Nick keeps x and Eliezer keeps y. Then the expected utility for Nick is

0.85 x − 0.15 ($20 − x),

while the expected utility for Eliezer is

0.8 y − 0.2 ($20 − y).

Setting these equal to each other, and using x + y = $20, yields that Nick should keep x = $9.50, leaving y = $10.50 for Eliezer.
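
A minimal sympy sketch of the same calculation, for anyone who wants to check the algebra (sympy is only used here to solve the two linear equations):

```python
# Sketch: solve the "equal expected utility" split described above with sympy.
import sympy as sp

x, y = sp.symbols("x y", positive=True)   # x = Nick's share, y = Eliezer's share
eu_nick    = 0.85 * x - 0.15 * (20 - x)   # Nick: 85% it's his
eu_eliezer = 0.80 * y - 0.20 * (20 - y)   # Eliezer: 80% it's his
sol = sp.solve([sp.Eq(eu_nick, eu_eliezer), sp.Eq(x + y, 20)], [x, y])
print(sol)   # -> x = 9.5, y = 10.5
```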

Do you stand behind that decision, even though it does not give the entire 20 to EY when he is 100% sure he deserves it?

I'm not sure what you mean. The algorithm I described gives the same formula f=(p+q)/2 as Marcello gave, although I arrived at it with a different justification. According to that formula, if EY and NB are both 100% sure that the bill was EY's, then EY gets the entire $20.

If each of them is 100% sure that the bill is his own, then they split the bill 50-50.

If EY is 100% sure that the bill is his, while NB thinks that there is a significant chance that the bill is his (e.g., 50%), then EY doesn't get the entire $20. Do you see a compelling reason why he should?

I meant that if EY is 100% sure, while NB is 50%, then EY should get the entire 20. I don't think that I see anything you don't on this. To me, it seems that if EY knows something, and NB trusts him, then NB should update to know it too, but it looks like you disagree. Perhaps I am working by analogy. I think that the equivalent property for the g function is more clearly true: if A is a theorem, and B is a conjecture, and we prove they are equivalent, then B is a theorem as well.

However, it at least seems that EY should get close to all of it. If there is 60 in the pot, and NB just threw money in and has no idea whether he put 20 or 40 in, while EY knows (99.99%) that he put exactly 40 into the pot, then EY should get a lot more than $15.

It's very hard to say what will happen if they are each going to update based on each other's probabilities.

Aumann's theorem doesn't apply directly, because they do not have common knowledge of their posteriors, even after they exchange probabilities. Each will know that the other will have updated, but he won't know what the other's new posterior is. It's not clear to me that their probabilities will begin to converge even after they go through many iterations of exchanging posteriors. If their probabilities converge, then that convergence value will depend subtly on what each knows about the other, what each knows that the other knows about him, and so on ad infinitum.

Nonetheless, you're right about what will happen if EY starts out 100% confident (which he never would). In that case, no matter what, if their posteriors converge, then they would have to converge on 100% certainty that the money belongs to Eliezer. If EY starts out 100% confident, no amount of confidence on NB's part could ever make him budge from that absolute certainty. I'm not sure what conditions would guarantee that their probabilities would converge. (They certainly won't if NB starts out 100% certain that the money is his.) But, if they could somehow establish that their probabilities would converge, then, yes, they may as well give all the money to EY.

But, in general, I don't know how to analyze the problem if you allow them to update based on each other's posteriors. I don't know how they could determine whether their posteriors will converge, nor what that value of convergence might be.

If they aren't allowed to update, and the $20 must be apportioned based on their initial probabilities, then Marcello's f=(p+q)/2 formula seems to me to be the best way to go.

Method 3 chooses the unique f such that updating p to f and updating q to f require the same amount of information.

Before reading GreedyAlgorithm's post, I decided independently that I support method 3, although I may find another answer I like better. Methods 1 and 2 I do not like, because if you let p=1 and q=1/2, they do not give all the money to EY. Method 4 I do not like, because I think that f(p,p) should equal p. However, I have no argument for 3 other than the fact that it feels right, and it meets all the criteria I can think of in degenerate cases.

Method 4 I do not like, because I think that f(p,p) should equal p.

Why? If Eliezer and Nick independently give 60% probability to the money being Eliezer's, my posterior probability estimate for that would be higher than that. (OTOH, there's the question of how independent their estimates actually are.)

There is a question of how independent their estimates are, and I think that the algorithm should be consistent under being repeatedly applied. If EY and NB update their probabilities to the same thing, and then try to update again, their estimates should not change.

In my opinion, the question should not be about how to apply the Aumann agreement theorem, but about how to compromise. That is the spirit of #2 and #3: they attempt to find the average value. (The difference is that one thinks the scale that should be used is p, and the other thinks it is log(p/(1-p)).)

I do not think that this question has a unique solution. Probability doesn't give us an answer. We are trying to determine what is fair. I think that my position is that the fair thing to do is to follow the result of the g question. The g question tells us how to combine probabilities without information of how independent they are if the goals and beliefs belong to a single person. If we have two people who trust each other and do not want more than their share, then they should adopt the same probability as if they were one person.

For the question on the g function, it is not about what is fair, and instead about what is safe. If I am going to prescribe a general rule on how to combine these estimates that does not know how independent they are, I want it to be consistent under repeated application so I don't send all of my probabilities off to 1 when I shouldn't.

There's something really off with the formatting here - some of the paragraphs have overlapping lines

Thanks. It was never off on my browser, but I think I may have fixed it. Am I right?

Yep, fixed!

My first reaction to the second question is to consider the case in which p + q = 1. Then, the answer is clearly that g(p, q) = p + q. I suspect that this is incomplete, and that further relevant information needs to be specified for the answer to be well-defined.

[This comment is no longer endorsed by its author]

I think that when p+q=1, the answer is clearly 1/2 due to symmetry. How did you get p+q?

If p + q = 1, then p(A or B) = 1. The equivalence statement about A and B that we're updating can be stated as (A or B) iff (A and B). Since probability mass is conserved, it has to go somewhere, and everything but A and B have probability 0, it has to go to the only remaining proposition, which is g(p, q), resulting in g(p, q) = 1. Stating this as p+q was an attempt to find something from which to further generalize.

[This comment is no longer endorsed by its author]

Oh, I just noticed the problem. When you say p(A or B)=1, that assumes that A and B are disjoint, or equivalently that p(A and B)=0.

The theorem you are trying to use when you say p(A or B)=1 is actually:

p(A or B)=p(A)+p(B)-p(A and B)

Ok, this is a definition discrepancy. The or that I'm using is (A or B) <-> not( (not A) and (not B)).

Edit: I was wrong for a different reason.

I think that either I have communicated badly, or you are making a big math mistake. (or both)

Say we believe A with probability p and B with probability 1-p. (We therefore believe not A with probability 1-p and not B with probability p.)

You claim that if we learn A and B are equivalent then we should assign probability 1 to A. However, a symmetric argument says that we should also assign probability 1 to not A. (Since not A and not B are equivalent and we assigned probabilities adding up to 1.)

This is a contradiction.

Is that clear?

Yes. Woops.