After being told whether they are deciders or not, 9 people will correctly infer the outcome of the coin flip, and 1 person will have been misled and will guess incorrectly. So far so good. The problem is that there is a 50% chance that the one person who is wrong is going to be put in charge of the decision. So even though I have a 90% chance of guessing the state of the coin, the structure of the game prevents me from ever having more than a 50% chance of the better payoff.
eta: Since I know my attempt to choose the better payoff will be thwarted 50% of the time, the statement "saying 'yea' gives 0.9*1000 + 0.1*100 = 910 expected donation" isn't true.
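To make that concrete, here is a quick Monte Carlo sketch of my own (in Python; the function names are just illustrative) of the two precommitted policies under the rules as stated. It should show the all-"yea" policy averaging roughly $550 per round against $700 for all-"nay", even though roughly 90% of decider-slots occur in tails rounds:

```python
import random

def run_round(policy_yea: bool) -> tuple[int, int]:
    """One round of the game, with every decider precommitted to the same answer.

    Returns (number of deciders, donation)."""
    tails = random.random() < 0.5
    n_deciders = 9 if tails else 1
    if policy_yea:
        donation = 1000 if tails else 100   # all deciders agree on "yea"
    else:
        donation = 700                      # all deciders agree on "nay"
    return n_deciders, donation

def average_donation(policy_yea: bool, rounds: int = 100_000) -> float:
    return sum(run_round(policy_yea)[1] for _ in range(rounds)) / rounds

print("all 'yea':", average_donation(True))    # ~550
print("all 'nay':", average_donation(False))   # ~700

# The update itself is fine: about 90% of decider-slots occur on tails rounds,
# yet "yea" still loses on average.
tails_slots = total_slots = 0
for _ in range(100_000):
    n, _ = run_round(True)
    total_slots += n
    if n == 9:
        tails_slots += n
print("share of decider-slots on tails rounds:", tails_slots / total_slots)  # ~0.9
```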
It is an anthropic problem. Agents who don't get to make decisions, by definition, don't really exist in the ontology of decision theory. As a decision-theoretic agent, being told you are not the decider is equivalent to dying.
It is a nice feature of Psy-kosh's problem that it pumps the confusing intuitions we see in scenarios like the Sleeping Beauty problem without recourse to memory-erasing drugs or teleporters; I think it tells us something important about this class of problem. But mathematically the problem is equivalent to one where the coin flip doesn't make nine people deciders but instead copies you nine times, and I don't think there is a good justification for labeling these problems differently.
The interesting question is what this example tells us about the nature of this class of problem, and I'm having trouble putting my finger on just what that is.
I tell you that you're a decider [... so] the coin is 90% likely to have come up tails.
Yes, but
So saying "yea" gives 0.9 1000 + 0.1 100 = 910 expected donation.
... is wrong: you only get $1000 if everybody else also chose "yea". The calculation of expected utility when the coin comes up tails has to be more complex than that.
Let's take a detour through a simpler coordination problem: There are 10 deciders with no means of communication. I announce I will give $1000 to a Good and Worthy Charity for each decider that chose "yea", except...
Here the optimal strategy is to choose "yea" with a certain probability p, which I don't have time to calculate right now
The expected value is $1000(10p - 10p^10). Maximums and minimums of functions may occur when the derivative is zero, or at boundaries.
The derivative is $1000(10 - 100p^9). This is zero when p = 0.1^(1/9) ~= 0.774. The boundaries of 0 and 1 are minima, and this is a maximum.
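For what it's worth, here is a tiny numerical check of that optimum (my own Python sketch, using only the expected-value formula stated above):

```python
# Expected value of the mixed strategy from the formula above:
# f(p) = 1000 * (10p - 10p^10)
def f(p: float) -> float:
    return 1000 * (10 * p - 10 * p ** 10)

p_star = 0.1 ** (1 / 9)          # root of the derivative 1000*(10 - 100p^9)
print(p_star)                    # ~0.774
print(f(0), f(1), f(p_star))     # 0, 0, ~6969 -- the interior point beats both boundaries

# A coarse grid search agrees that p_star is the maximum:
print(max(range(1001), key=lambda i: f(i / 1000)) / 1000)   # ~0.774
```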
EDIT: Huh. This simple calculation that mildly adds to the parent is worth more karma than the parent? I thought the parent really got to the heart of things with: "(because there's no reliable way to account for the decisions of others if they depend on yours)" Of course, TDT and UDT are attempts to do just that in some circumstances.
I can't formalize my response, so here's an intuition dump:
It seemed to me that a crucial aspect of the 1/3 solution to the sleeping beauty problem was that for a given credence, any payoffs based on hypothetical decisions involving said credence scaled linearly with the number of instances making the decision. In terms of utility, the "correct" probability for sleeping beauty would be 1/3 if her decisions were rewarded independently, 1/2 if her (presumably deterministic) decisions were rewarded in aggregate.
The 1/2 situation is mirrored here: Th...
The first is correct. If you expect all 10 participants to act the same, you should not distinguish between the cases when you yourself are the sole decider and when one of the others is the sole decider. Your being you should have no special relevance. Since you are a pre-existing human with a defined identity this is highly counterintuitive, but this problem really is no different from the following one: an urn holds 10 balls of different colors; someone tosses a coin and draws 1 ball if it comes up heads and 9 balls if it comes up tails, and in either case calls o...
I claim that the first is correct.
Reasoning: the Bayesian update is correct, but the computation of expected benefit is incomplete. Among all universes, deciders are "group" deciders nine times as often as they are "individual" deciders. Thus, while being a decider indicates you are more likely to be in a tails-universe, the decision of a group decider is 1/9th as important as the decision of an individual decider.
That is to say, your update should shift probability weight toward you being a group decider, but you should recognize that ...
Do you have in mind something like 0.9*1000/9 + 0.1*100/1 = 110? This doesn't look right.
This can be justified by a change of rules: each decider gets their share of the total sum (to donate it, of course). Then the expected personal gain before learning your status is:
for "yea": 0.5*(0.9*1000/9+0.1*0)+0.5*(0.9*0+0.1*100/1)=55
for "nay": 0.5*(0.9*700/9+0.1*0)+0.5*(0.9*0+0.1*700/1)=70
Expected personal gain for a decider:
for "yea": 0.9*1000/9+0.1*100/1=110
for "nay": 0.9*700/9+0.1*700/1=140
Edit: corrected error in value of first expected benefit.
Edit: Hm, it is possible to reformulate Newcomb's problem in a similar fashion. One of the subjects (A) is asked whether ze chooses one box or two boxes; another subject (B) is presented with two boxes whose contents are set according to A's choice. If they make identical decisions, then they get what they chose; otherwise they get nothing.
And here's a reformulation of Counterfactual Mugging in the same vein. Find two subjects who don't care about each other's welfare at all. Flip a coin to choose one of them who will be asked to give up $100. If ze agrees, the other one receives $10000.
This is very similar to a rephrasing of the Prisoner's Dilemma known as the Chocolate Dilemma. Jimmy has the option of taking one piece of chocolate for himself, or taking three pieces and giving them to Jenny. Jenny faces the same choice: take one piece for herself or three pieces for Jimmy. This formulation makes it very clear that two myopically-rational people will do worse than two irrational people, and that mutual precommitment at the start is a good idea.
This stuff is still unclear to me, but there may be a post in here once we work it out. Would you like to cooperate on a joint one, or something?
Your second option still implicitly assumes that you're the only decider. In fact each of the possible deciders in each branch of the simulation would be making an evaluation of expected payoff -- and there are nine times as many in the "tails" branches.
There are twenty branches of the simulation, ten with nine deciders and ten with one decider. In the one-decider branches, the result of saying "yea" is a guaranteed $100; in the nine-decider branches, it's $1000 in the single case where everyone agrees, $700 in the single case where e...
If we're assuming that all of the deciders are perfectly correlated, or (equivalently?) that for any good argument for whatever decision you end up making, all the other deciders will think of the same argument, then I'm just going to pretend we're talking about copies of the same person, which, as I've argued, seems to require the same kind of reasoning anyway, and makes it a little bit simpler to talk about than if we have to speak as though everyone is a different person who will reliably make the same decision.
Anyway:
Something is being double-coun...
(No points for saying that UDT or reflective consistency forces the first solution. If that's your answer, you must also find the error in the second one.)
Under the same rules, does it make sense to ask what is the error in refusing to pay in a Counterfactual Mugging? It seems like you are asking for an error in applying a decision theory, when really the decision theory fails on the problem.
you should do a Bayesian update: the coin is 90% likely to have come up tails. So saying "yea" gives 0.9*1000 + 0.1*100 = 910 expected donation.
I'm not sure if this is relevant to the overall nature of the problem, but in this instance, the term 0.9*1000 is incorrect because you don't know if every other decider is going to be reasoning the same way. If you decide on "yea" on that basis, and the coin came up tails, and one of the other deciders says "nay", then the donation is $0.
Is it possible to insert the assumption that...
Okay. If that is indeed the intention, then I declare this an anthropic problem, even if it describes itself as non-anthropic. It seems to me that anthropic reasoning was never fundamentally about fuzzy concepts like "updating on consciousness" or "updating on the fact that you exist" in the first place; indeed, I've always suspected that whatever it is that makes anthropic problems interesting and confusing has nothing to do with consciousness. Currently, I think that in essence it's about a decision algorithm locating other decision algorithms correlated with it within the space of possibilities implied by its state of knowledge. In this problem, if we assume that all deciders are perfectly correlated, then (I predict) the solution won't be any easier than just answering it for the case where all the deciders are copies of the same person.
(Though I'm still going to try to solve it.)
I would decide "nay". Very crudely speaking a fundamental change in viewpoints is involved. If I update on the new information regarding heads vs tales I must also adjust my view of what I care about. It is hard to describe in detail without writing an essay describing what probability means (which is something Wei has done and I would have to extend to allow for the way to describe the decision correctly if updating is, in fact, allowed.).
So let’s modify the problem somewhat. Instead of each person being given the “decider” or “non-decider” hat, we give the "deciders" rocks. You (an outside observer) make the decision.
Version 1: You get to open a door and see whether the person behind the door has a rock or not. Winning strategy: After you open a door (say, door A), make a decision. If A has a rock, then say "yes". Expected payoff: 0.9*1000 + 0.1*100 = 910 > 700. If A has no rock, say "no". Expected payoff: 700 > 0.9*100 + 0.1*1000 = 190.
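Here is a minimal arithmetic check of Version 1 (my own sketch; the "overall" figure is my addition, assuming door A holds a rock exactly half the time):

```python
# Conditional on seeing a rock behind door A, P(tails) = 0.9:
yes_given_rock = 0.9 * 1000 + 0.1 * 100      # ~910 > 700, so say "yes"
# Conditional on seeing no rock behind door A, P(heads) = 0.9:
yes_given_no_rock = 0.9 * 100 + 0.1 * 1000   # ~190 < 700, so say "no" and take 700
# Door A holds a rock with probability 0.5*0.9 + 0.5*0.1 = 0.5, so the
# peek-at-door-A strategy is worth about:
overall = 0.5 * yes_given_rock + 0.5 * 700
print(yes_given_rock, yes_given_no_rock, overall)   # ~910, ~190, ~805
```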
Version 2: The host (we’ll call him ...
The devil is in the assumption that everyone else will do the same thing as you for presumably the same reasons. "Nay" is basically a better strategy in general, though it's not always right. The 90% odds of tails are correctly calculated.
Each decider will be asked to say "yea" or "nay". If the coin came up tails and all nine deciders say "yea", I donate $1000 to VillageReach. If the coin came up heads and the sole decider says "yea", I donate only $100. If all deciders say "nay", I donate $700 regardless of the result of the coin toss. If the deciders disagree, I don't donate anything.
Suppose that instead of donating directly (presumably for tax reasons), you instead divide the contribution up among the deciders, and then let them pass i...
Un-thought-out-idea: We've seen in the Dutch-book thread that probability and decision theory are inter-reliant so maybe
Classical Bayes is to Naive Decision Theory
as
Bayes-we-need-to-do-anthropics is to UDT
Actually now that I've said that, it doesn't seem too ground-breaking. Meh.
This gives you new information you didn't know before - no anthropic funny business, just your regular kind of information - so you should do a Bayesian update: the coin is 90% likely to have come up tails.
Why 90% here? The coin is still fair, and anthropic reasoning should still remain, since you have to take into account the probability of receiving the observation when updating on it. Otherwise you become vulnerable to filtered evidence.
Edit: I take back the sentence on filtered evidence.
Edit 2: So it looks like the 90% probability estimate is actual...
The error in the reasoning is that it is not you who makes the decision, but the COD (collective of the deciders), which might be composed of different individuals in each round and might be one or nine depending on the coin toss.
In every round the COD will get told that they are deciders but they don't get any new information because this was already known beforehand.
P(Tails| you are told that you are a decider) = 0.9
P(Tails| COD is told that COD is the decider) = P(Tails) = 0.5
To make it easier to understand why the "yes" strategy is wrong, i...
I'm retracting this one in favor of my other answer:
http://lesswrong.com/lw/3dy/solve_psykoshs_nonanthropic_problem/d9r4
So saying "yea" gives 0.9 1000 + 0.1 100 = 910 expected donation.
This is simply wrong.
If you are a decider then the coin is 90% likely to have come up tails. Correct.
But it simply doesn't follow from this that the expected donation if you say yes is 0.9*1000 + 0.1*100 = 910.
To the contrary, the original formula is still true: 0.5*1000 + 0.5*100 = 550
So you should stil say "nay" and of course hope that everyone el...
If we are considering it from an individual perspective, then we need to hold the other individuals fixed, that is, we assume everyone else sticks to the tails plan:
In this case there is a 10% chance of becoming a decider with heads and causing a $1000 donation, and a 90% chance of becoming a decider with tails and causing a $100 donation.
That is 0.1*1000 + 0.9*100 = $190, which is a pretty bad deal.
If we want to allow everyone to switch, the difficulty is that the other people haven't chosen their action yet (or even a set of actions with fixed probability), so we can't ...
Precommitting to "Yea" is the correct decision.
The error: the expected donation for an individual agent deciding to precommit to "nay" is not 700 dollars. It's Pr(selected as decider) * 700 dollars, which is 350 dollars.
Why is this the case? Right here:
Next, I will tell everyone their status, without telling the status of others ... Each decider will be asked to say "yea" or "nay".
In all the worlds where you get told you are not a decider (50% of them - equal probability of 9:1 chance or a 1:9 chance) your precom...
Below is a very unpolished chain of thought, based on a vague analogy with the symmetric state of two indistinguishable quantum particles.
When a participant is told ze is a decider, ze can reason: let's suppose that before the coin was flipped I had changed places with someone else; would it make a difference? If the coin came up heads, then I'm the sole decider and there are 9 swaps which make a difference to my observations. If the coin came up tails, then there's one swap that makes a difference. But if a swap doesn't make a difference, it is effectively one world, so there are 20 worlds I...
Once I have been told I am a decider, the expected payouts are:
For saying Yea: $10 + P*$900. For saying Nay: $70 + Q*$700.
P is the probability that the other 8 deciders, if they exist, all say Yea, conditioned on my saying Yea; Q is the probability that the other 8 deciders, if they exist, all say Nay, conditioned on my saying Nay.
For Yea to be the right answer, i.e. to maximize money for African kids, we manipulate the inequality to find P > 89% - 78%*Q. The lowest value for P consistent with 0 <= Q <= 1 is P > 11%, which occurs when Q = 100%.
What are P & Q...
Thank you for that comment. I think I understand the question now. Let me restate it somewhat differently to make it, I think, clearer.
All 10 of us are sitting around trying to pre-decide a strategy for optimizing contributions.
The first situation we consider is labeled "First" and quoted in your comment. If deciders always say yay, we get $1000 for heads and $100 for tails which gives us an expected $550 payout. If we all say nay, we get $700 for either heads or tails. So we should predecide to say "nay."
But then Charlie says "I think we should predecide to say "yay." 90% of the time I am informed I am a decider, there are 8 other deciders besides me, and if we all say "yay" we get $1000. 10% of the time I am informed I am a decider, we will only get $100. But when I am informed I am a decider, the expected payout is $910 if we all say yay, and only $700 if we all say nay."
Now Charlie is wrong, but it's no good just asserting it. I have to explain why.
It is because Charlie has failed to consider the cases when he is NOT chosen as a decider. He is mistakenly thinking that since in those cases he is not chosen as a...
Is this some kind of inverse Monty Hall problem? Counterintuitively, the second solution is incorrect.
If everyone pledges to answer "yea" in case they end up as deciders, you get 0.5*1000 + 0.5*100 = 550 expected donation.
This is correct. There are 10 cases in which we have a single decider and win $100, and there are 10 cases in which we have a single non-decider and win $1000, and these 20 cases are all equally likely.
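Here is that counting argument spelled out as a short Python enumeration (my own sketch; "singled-out person" just means the sole decider under heads, or the sole non-decider under tails):

```python
from itertools import product

def expected_donation(answer: str) -> float:
    """Average over the 20 equally likely (coin, singled-out person) cases,
    assuming everyone gives the same precommitted answer."""
    total = 0
    for coin, _person in product(["heads", "tails"], range(10)):
        if answer == "nay":
            total += 700                                # unanimous "nay" always pays 700
        else:
            total += 100 if coin == "heads" else 1000   # unanimous "yea"
    return total / 20

print(expected_donation("yea"))   # 550.0
print(expected_donation("nay"))   # 700.0
```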
you should do a Bayesian update: the coin is 90% likely to have come up tails.
By calculation (or by drawing a decision tree...
A note on the problem statement: You should probably make it more explicit that mixed strategies are a bad idea here (due to the 0-if-disagreement clause). I spent a bit wondering why you restricted to pure strategies and didn't notice it until I actually attempted to calculate an optimal strategy under both assumptions. (FWIW, graphing things, it seems that indeed, if you assume both scenarios are equally likely, pure "nay" is best, and if you assume 9 deciders is 90% likely, pure "yea" is best.)
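In case it helps, this is the kind of calculation I graphed (a sketch; `p_nine_deciders` is the assumed weight on the nine-decider scenario, and each decider independently says "yea" with probability p):

```python
def expected_donation(p: float, p_nine_deciders: float) -> float:
    one = p * 100 + (1 - p) * 700                  # single-decider scenario
    nine = p ** 9 * 1000 + (1 - p) ** 9 * 700      # nine deciders; any disagreement pays $0
    return (1 - p_nine_deciders) * one + p_nine_deciders * nine

for weight in (0.5, 0.9):
    best_p = max((i / 100 for i in range(101)),
                 key=lambda p: expected_donation(p, weight))
    print(weight, best_p, expected_donation(best_p, weight))
# weight 0.5: maximum at p = 0 (pure "nay", $700)
# weight 0.9: maximum at p = 1 (pure "yea", $910)
```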
I'm assuming all deciders are coming to the same best decision so no worries about deciders disagreeing if you change your mind.
I'm going to be the odd one out here and say that both answers are correct at the time they are made… if you care far more (which I don't think you should) about African kids in your own Everett branch than in others (or live in a hypothetical crazy universe where many worlds is false).
(Chapter 1 of Permutation City spoiler; please click here first if you haven't read it yet, you'll be glad you did...): Jura lbh punatr lbhe zvaq nsgre orvat gbyq, lbh jv...
We could say that the existence of pre-agreed joint strategies invalidates standard decision theory.
It's easy to come up with scenarios where coordination is so valuable that you have to choose not to act on privileged information. For example, you're meeting a friend at a pizzeria, and you spot a better-looking pizzeria two blocks away, but you go to the worse one because you'd rather eat together than apart.
Psy-Kosh's problem may not seem like a coordination game, but possibilities for coordination can be subtle and counter-intuitive. See, for example, ...
Stream-of-consciousness-style answer. Not looking at other comments so I can see afterwards if my thinking is the same as anyone else's.
The argument for saying yea once one is in the room seems to assume that everyone else will make the same decision as me, whatever my decision is. I'm still unsure whether this kind of thinking is allowed in general, but in this case it seems to be the source of the problem.
If we take the opposite assumption, that the other decisions are fixed, then the problem depends on those decisions. If we assume that all the others (i...
Initially either 9 or 1 of the 10 people will have been chosen with equal likelihood, meaning I had a 50% chance of being chosen. If being chosen means I should find 90% likelihood that the coin came up tails, then not being chosen should mean I find 90% likelihood that the coin came up heads (which it does). If that were the case, I'd want nay to be what the others choose (0.9*100 + 0.1*1000 = 190 < 700). Since both branches are equally likely (initially), and my decision of what to choose in the branch in which I choose corresponds (presumably) to t...
This struck an emotional nerve with me, so I'm going to answer as if this were an actual real-life situation, rather than an interesting hypothetical math problem about maximizing expected utility.
IMHO, if this was a situation that occurred in real life, neither of the solutions is correct. This is basically another version of Sophie's Choice. The correct solution would be to punch you in the face for using the lives of children as pawns in your sick game and trying to shift the feelings of guilt onto me, and staying silent. Give the money or not as you se...
The source is here. I'll restate the problem in simpler terms:
You are one of a group of 10 people who care about saving African kids. You will all be put in separate rooms, then I will flip a coin. If the coin comes up heads, a random one of you will be designated as the "decider". If it comes up tails, nine of you will be designated as "deciders". Next, I will tell everyone their status, without telling the status of others. Each decider will be asked to say "yea" or "nay". If the coin came up tails and all nine deciders say "yea", I donate $1000 to VillageReach. If the coin came up heads and the sole decider says "yea", I donate only $100. If all deciders say "nay", I donate $700 regardless of the result of the coin toss. If the deciders disagree, I don't donate anything.
First let's work out what joint strategy you should coordinate on beforehand. If everyone pledges to answer "yea" in case they end up as deciders, you get 0.5*1000 + 0.5*100 = 550 expected donation. Pledging to say "nay" gives 700 for sure, so it's the better strategy.
But consider what happens when you're already in your room, and I tell you that you're a decider, and you don't know how many other deciders there are. This gives you new information you didn't know before - no anthropic funny business, just your regular kind of information - so you should do a Bayesian update: the coin is 90% likely to have come up tails. So saying "yea" gives 0.9*1000 + 0.1*100 = 910 expected donation. This looks more attractive than the 700 for "nay", so you decide to go with "yea" after all.
Only one answer can be correct. Which is it and why?
(No points for saying that UDT or reflective consistency forces the first solution. If that's your answer, you must also find the error in the second one.)