I have an interesting solution to the non-anthropic problem. Firstly, the reward of 0 for voting differently is ignored in all the calculations, as it is assumed the other agent is acting identically. Therefore, its value is irrelevant (unless of course it becomes so high that the agents start deliberately employing randomisation in an attempt to vote differently, which would distort the problem).
However, consider what happens if you set the value to 9. In this case, you can forget about the other agent entirely. Voting heads if the coin was tails always loses exactly 1, while voting tails if the coin was heads loses 3. Since no method gives a probability higher than 3/4 for the coin being tails, the answer is simple: vote heads. Of course, this is a different problem, but it highlights the fact that any method which tells you to vote tails, and yet does not include the 0 anywhere in the calculations (since it assumes the agents can't possibly vote differently), is clearly suspect.
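A quick sketch of that arithmetic, using only the relative losses just described (lose 1 for voting heads when the coin was tails, lose 3 for voting tails when it was heads); everything else here is illustrative scaffolding of my own:

```python
# With the disagreement payoff set to 9, each agent is just betting on the
# coin: voting heads costs 1 if the coin was tails, voting tails costs 3 if
# the coin was heads. (Relative losses taken from the comment above.)

def expected_loss(p_tails, vote):
    """Expected loss of a vote, given a subjective probability of tails."""
    if vote == "heads":
        return p_tails * 1        # lose 1 only when the coin was tails
    else:
        return (1 - p_tails) * 3  # lose 3 only when the coin was heads

for p in (0.5, 2/3, 0.75, 0.8):
    better = min(("heads", "tails"), key=lambda v: expected_loss(p, v))
    print(f"P(tails)={p:.2f}: heads loses {expected_loss(p, 'heads'):.2f}, "
          f"tails loses {expected_loss(p, 'tails'):.2f} -> vote {better}")

# Voting tails only wins once 3*(1 - p) < 1*p, i.e. once P(tails) exceeds 3/4.
```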
Multiply what by that zero? There are so many things you might mean by that, and if even one of them made any sense to me I'd just assume that was it, but as it stands I have no idea. Not a very helpful comment.
Well, suppose you're doing an expected utility calculation, and the utility of outcome 1 is U1, the utility of outcome 2 is U2, and so on.
Then your expected utility looks like (some stuff)*U1 + (some other stuff)*U2, and so on. The stuff in parentheses is usually the probability of outcome N occurring, but some systems might include a correction based on collective decision-making or something, and that's fine.
Now suppose that U1=0. Then your expected utility looks like (some stuff)*0 + (some other stuff)*U2, and so on. Which is equal to (that other stuff)*U2, etc, because you just multiplied the first term by 0. So the zero is in there. You've just multiplied by it.
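For instance, a toy illustration (the probabilities here are made up purely to show the zero term dropping out):

```python
# Toy illustration: the term with utility 0 contributes nothing, so the
# expected utility is the same whether or not you write it down explicitly.
# Probabilities are placeholders, not values from the problem.
probs = [0.25, 0.75]   # "some stuff" and "some other stuff"
utils = [0, 10]        # U1 = 0 (the zero outcome), U2 = 10

eu_full = sum(p * u for p, u in zip(probs, utils))
eu_shortcut = probs[1] * utils[1]   # drop the zero term up front
assert eu_full == eu_shortcut       # multiplying by 0 is what makes the shortcut valid
```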
Ok, thanks, that makes more sense than anything I'd guessed.
There's a difference between shortcutting a calculation and not accounting for something in the first place. Of all the approaches mentioned in the paper (e.g. SIA/SSA, split responsibility, precommitments and so on), not one would give a different answer if that 0 were a 5, a 9, or a -100. It's not because they're shortcutting the maths, it's because, as I said in my first comment, they assume that it's effectively not possible for the two people to vote differently anyway. Which is fine in the abstract, even if it's a little suspect in practice (since this, for once, is a quite realisable experiment).
I'll rephrase my final line then: "If a method says to vote tails, and yet would give the same answer with the 0 changed to a 9, then it is clearly suspect". Incidentally I don't know of a method which says "vote tails" and would give a different answer if you changed the 0 to a 9 either.
I think the reason I didn't get your comment originally is that the first thing I do with this problem is work with the differences - which in this case means subtracting everything from 10 and thinking in terms of money lost on bad votes, not absolute values. So I wouldn't be multiplying by 0. It's neither better nor worse, it just explains why I didn't know what you meant.
Oh, okay. Looks like I didn't really understand your point when I commented :)
Perhaps I still don't - you say "no method gives a probability higher than 3/4 for the coin being tails," but you've in fact been given information that should cause you to update that probability. It's like someone had a bag with 10 balls in it. That person flipped a coin, and if the coin was heads the bag has 9 black balls and 1 white ball, but if the coin was tails the bag has 9 white balls and 1 black ball. They reach into the bag and hand you a ball at random, and it's black - what's the probability the coin was heads?
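For anyone who wants the arithmetic spelled out, the update for that example is just Bayes' theorem:

```python
# Bayes update for the bag-of-balls example above.
p_heads = 0.5
p_black_given_heads = 9 / 10
p_black_given_tails = 1 / 10

p_black = p_black_given_heads * p_heads + p_black_given_tails * (1 - p_heads)
p_heads_given_black = p_black_given_heads * p_heads / p_black
print(p_heads_given_black)  # 0.9 -- the black ball is strong evidence for heads
```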
If you reward disagreement, then what you're really rewarding in this case are mixed (probabilistic) actions. The reward only pays out if the coin landed tails, so that there's someone else to disagree with. So people will give what seems to them to be the same honest answer when you change the result of disagreeing from 0 to 0+epsilon. But when the payoff from disagreeing passes the expected payoff of honesty, agents will pick mixed actions.
To be more precise: if we simplify a little and only let them choose 50/50 if they want to disagree, then the expected utility of honesty is P(heads)*U(choice, heads) + P(tails)*U(choice, tails), while the expected utility of coin-flipping is pretty much P(heads)*U(average, heads) + P(tails)*U(disagree, tails). These will pass each other at different values of U(disagree, tails) depending on what you think P(heads) and P(tails) are, and also depending on which choice you think is best.
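Here's a rough sketch of that crossover; the payoff numbers are placeholders of my own, not values from the paper, since the point is only that the break-even value of U(disagree, tails) moves with P(heads) and with which choice you call honest:

```python
# Rough sketch of the crossover described above. Only the structure of the
# two expected-utility formulas is taken from the comment; all numbers below
# are made up for illustration.

def breakeven_disagree_payoff(p_heads, u_choice_heads, u_choice_tails, u_average_heads):
    """Value of U(disagree, tails) at which coin-flipping matches honesty.

    Solves P(h)*U(choice,h) + P(t)*U(choice,t)
         = P(h)*U(average,h) + P(t)*x          for x.
    """
    p_tails = 1 - p_heads
    return p_heads * (u_choice_heads - u_average_heads) / p_tails + u_choice_tails

# With made-up payoffs 10 / 8 for the honest vote and 9 for the averaged
# heads outcome, the mixed action only starts to win once the disagreement
# payoff climbs past this threshold - and the threshold shifts if P(heads) does.
print(breakeven_disagree_payoff(p_heads=1/2, u_choice_heads=10, u_choice_tails=8, u_average_heads=9))  # 9.0
print(breakeven_disagree_payoff(p_heads=1/3, u_choice_heads=10, u_choice_tails=8, u_average_heads=9))  # 8.5
```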
I tried to cover what you're talking about with my statement in brackets at the end of the first paragraph. Set the value for disagreeing too high and you're rewarding it, in which case people start deliberately making randomised choices in order to disagree. Too low and they ought to be going out of their way to try and agree above all else - except there's no way to do that in practice, and no way not to do it in the abstract analysis that assumes they think the same. A value of 9 though is actually in between these two cases - it's exactly the average of the two agreement options, and it neither punishes nor rewards disagreement. It treats disagreement "fairly", and in doing so entirely un-links the two agents. Which is exactly why I picked it, and why it simplifies the problem. Again I think I'm thinking of these values relatively while you're thinking absolutely - a value of epsilon for disagreeing is not rewarding disagreeing slightly, it's still punishing it severely relative to the other outcomes.
To me what it illustrates is that the linking between the two agents is something of an illusion in the first place. Punishing disagreement encourages the agents to collaborate on their vote, but the problem provides no explicit means for them to do so. Introducing an explicit means to co-operate, such as pre-commitment or having the agents run identical decision algorithms, would dissolve the problem into a clear solution (actually, explicitly identical algorithms make it a version of Newcomb's Paradox, but that's at least a well-studied problem). It's the ambiguity of how to co-operate, combined with the strong motivation, lack of explicit means, and abundance of theoretical means to hand-wave agreement, that creates the paradox.
As for the stuff you say about the probability and the bag of coloured balls, I get all that. The original probability of the coin flip was 1/2 each way. The evidence that you've been asked to vote makes the subjective likelihood of tails 2/3. Also, somehow the number 3/4 appears in the SSA solution to the Sleeping Beauty problem (which to me seems just flat-out wrong, and enough for me to write off that method unless I see a very good defence of it), which made me worry that somewhere out there was a method which somehow comes up with 3/4. So I covered my bases by saying "no method gives probability higher than 3/4", which was the minimum necessary requirement and what I figured was a fairly safe statement. The reality is that 2/3 is simply correct for the subjective probability of tails, for reasons like you say, and maybe I just confuse things by mucking about trying to cover all possible bad solutions. It is, I admit, a little confusing to talk about whether anything is "more than 3/4" when the only two values under serious consideration are the a priori 1/2 and the subjective posterior 2/3.
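For completeness, the 2/3 comes out of a straightforward update, assuming the usual setup where both people are asked to vote if the coin lands tails but only one of the two (picked at random) if it lands heads:

```python
# Subjective probability of tails given that you have been asked to vote,
# assuming: both asked on tails, one of the two asked at random on heads.
p_tails = 0.5
p_asked_given_tails = 1.0
p_asked_given_heads = 0.5

p_asked = p_asked_given_tails * p_tails + p_asked_given_heads * (1 - p_tails)
p_tails_given_asked = p_asked_given_tails * p_tails / p_asked
print(p_tails_given_asked)  # 0.666... = 2/3
```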
Yeah, I didn't know exactly what problem statement you were using (the most common formulation of the non-anthropic problem I know is this one), so I didn't know "9" was particularly special.
Though the point at which I think randomization becomes better than honesty depends on my P(heads) and on what choice I think is honest, so which value of the randomization-reward is special is fuzzy.
I guess I'm not seeing any middle ground between "be honest" and "pick randomization as an action," even for naive CDT where "be honest" gets the problem wrong.
which made me worry that somewhere out there was a method which somehow comes up with 3/4.
Somewhere in Stuart Armstrong's bestiary of non-probabilistic decision procedures you can get an effective 3/4 on the Sleeping Beauty problem, but I wouldn't worry about it - that bestiary is silly anyhow :P
I know that the right way for me to handle this is to read the paper, but it might be helpful to expand your summary to define SSA and SIA, and causal versus evidential agents? (And presumably EDT versus CDT too, though I already know those.)
SIA and SSA are defined in http://lesswrong.com/lw/892/anthropic_decision_theory_ii_selfindication/
(post http://lesswrong.com/lw/891/anthropic_decision_theory_i_sleeping_beauty_and/ sets up the Sleeping Beauty problem).
I've already read your (excellent) paper "Anthropic Decision Theory". Is the FHI technical report basically a summary of this, or does it contain additional results? (Just want to know before taking the time to read the report.)
excellent
Thanks :-)
This tech report is more a motivation as to why anthropic decision theory might be needed - it shows that you can reach the same decision in different ways, and that SIA or SSA aren't enough to fix your decision. It's philosophically useful, but doesn't give any prescriptive results.
I drew the distinction earlier between subjective probability and betting behavior with a tale rather like the non-anthropic Sleeping Beauty table presented here.
It seems to me like the only difference between SSA + total, and SIA + divided, is which of these you're talking about when you speak of probability (SSA brings you to subjective probability, which must then be corrected to get proper bets; SIA gives you the right bets to make, which must be corrected to get the proper subjective probability).
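As a toy check of that equivalence (using a per-awakening bet of my own devising, a ticket costing x that pays 1 if the coin was tails, rather than the tech report's actual tables):

```python
# Toy check of SSA + total responsibility vs SIA + divided responsibility in
# a standard Sleeping Beauty bet: at each awakening you may buy, for price x,
# a ticket paying 1 if the coin was tails. Tails means two awakenings, heads
# means one. (Illustrative setup, not taken from the tech report.)

def eu_ssa_total(x):
    # SSA: P(tails | awake) = 1/2.  Total responsibility: count the summed
    # payoff of both awakenings in the tails world.
    return 0.5 * (-x) + 0.5 * (2 * (1 - x))

def eu_sia_divided(x):
    # SIA: P(tails | awake) = 2/3.  Divided responsibility: claim only half
    # of the summed tails-world payoff, since two copies share the decision.
    return (1/3) * (-x) + (2/3) * 0.5 * (2 * (1 - x))

for x in (0.5, 2/3, 0.8):
    print(x, eu_ssa_total(x) > 0, eu_sia_divided(x) > 0)

# Both say "buy" exactly when x < 2/3, even though the two expected utilities
# differ by a constant factor of 3/2 - same bets, different "probabilities".
```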
To deal with these ideas correctly, you need to use anthropic decision theory.
The best current online version of this is on Less Wrong, split into six articles (I'm finishing up an improved version, hopefully for publication):
It seems to me like the only difference between SSA + total, and SIA + divided, is which of these you're talking about when you speak of probability
Doesn't the isomorphism between them only hold if your SSA reference class is exactly the set of agents responsible for your decision?
(This question is also for Stuart -- by the way, thanks for writing this, the exposition of the divided responsibility idea was useful!)
In the anthropic decision theory formalism (see the link I posted in answer to Luke_A_Somers) SSA-like behaviour emerges from average utilitarianism (also selfish agents, but that's more complicated). The whole reference class complexity, in this context, is the complexity of deciding the class of agents that you average over.
Yes, I haven't studied the LW sequence in detail, but I've read the arxiv.org draft, so I'm familiar with the argument. :-) (Are there important things in the LW sequence that are not in the draft, so that I should read that too? I remember you did something where agents had both a selfish and a global component to their utility function, that wasn't in the draft...) But from the tech report I got the impression that you were talking about actual SSA-using agents, not about the emergence of SSA-like behavior from ADT; e.g. on the last page, you say
Finally, it should be noted that a lot of anthropic decision problems can be solved without needing to work out the anthropic probabilities and impact responsibility at all (see for instance the approach in (Armstrong, 2012)).
which sounds as if you're contrasting two different approaches in the tech report and in the draft, not as if they're both about the same thing?
[And sorry for misspelling you earlier -- corrected now, I don't know what happened there...]
What I really meant is - the things in the tech report are fine as far as they go, but the Anthropic Decision Theory paper is where the real results are.
I agree with you that the isomorphism only holds if your reference class is suitable (and for selfish agents, you need to mess around with precommitments). The tech report does make some simplifying assumptions (as its point was not to find the full conditions for rigorous isomorphism results, but to illustrate that anthropic probabilities are not enough on their own).
It seems to me that you're trying to invent a theory of kin selection between agents in possible worlds. Biology has a rich theory for how agents which resemble each other behave towards each other - kin selection. Biology too has to deal with other ways that agents can come to resemble each other - e.g. mimicry, convergent evolution and chance. However, in terms of producing cooperative behaviour, relatedness is the big one.
A technical report of the Future of Humanity Institute (authored by me), on why anthropic probability isn't enough to reach decisions in anthropic situations. You also have to choose your decision theory, and take into account your altruism towards your copies. And these components can co-vary while leaving your ultimate decision the same - typically, EDT agents using SSA will reach the same decisions as CDT agents using SIA, and altruistic causal agents may decide the same way as selfish evidential agents.
Anthropics: why probability isn't enough
This paper argues that the current treatment of anthropic and self-locating problems over-emphasises the importance of anthropic probabilities, and ignores other relevant and important factors, such as whether the various copies of the agents in question consider that they are acting in a linked fashion and whether they are mutually altruistic towards each other. These issues, generally irrelevant for non-anthropic problems, come to the forefront in anthropic situations and are at least as important as the anthropic probabilities: indeed they can erase the difference between different theories of anthropic probability, or increase their divergence. These considerations help to reinterpret decisions, rather than probabilities, as the fundamental objects of interest in anthropic problems.