tl;dr: in which I apply intuition to the anthropic trilemma, and it all goes horribly, horribly wrong
Some time ago, Eliezer constructed an anthropic trilemma, where standard theories of anthropic reasoning seemed to come into conflict with subjective anticipation. rwallace subsequently argued that subjective anticipation was not ontologically fundamental, so we should not expect it to work outside the narrow confines of everyday experience, and Wei illustrated some of the difficulties inherent in "copy-delete-merge" types of reasoning.
Wei also made the point that UDT shifts the difficulty in anthropic reasoning away from probability and onto the utility function, and ata argued that neither the probabilities nor the utility function is fundamental - it is the decisions that result from them that matter. After all, if two theories give the same behaviour in all cases, what grounds do we have for distinguishing them? I then noted that this argument could be extended to subjective anticipation: instead of talking about feelings of subjective anticipation, we could replace them with questions such as "would I give up a chocolate bar now for one of my copies to have two in these circumstances?"
In this post, I'll start by applying my intuitive utility/probability theory to the trilemma, to see what I would decide in these circumstances, and the problems that can result. I'll be sticking with classical situations rather than quantum ones, for simplicity.
So assume a (classical) lottery where I have a ticket with million-to-one odds. The trilemma presented a lottery-winning trick: set up the environment so that if I ever did win the lottery, a trillion copies of me would be created, they would experience winning the lottery, and then they would be merged/deleted down to one copy again.
So that's the problem; what's my intuition got to say about it? Now, my intuition claims there is a clear difference between my personal and my altruistic utility. Whether this is true doesn't matter; I'm just seeing whether my intuitions can be captured. I'll call the first my indexical utility ("I want chocolate bars") and the second my non-indexical utility ("I want everyone hungry to have a good meal"). I'll be neglecting the non-indexical utility, as it is not relevant to subjective anticipation.
Now, my intuitions tell me that SIA is the correct anthropic probability theory. They also tell me that having a hundred copies in the future, all doing exactly the same thing, is equivalent to having just one: therefore my current utility says I want to maximise the average utility of my future copies.
If I am a copy, then my intuitions tell me I want to selfishly maximise my own personal utility, even at the expense of my copies. However, if I were to be deleted, I would transfer my "interest" to my remaining copies. Hence my utility as a copy is my own personal utility if I'm still alive in this universe, and the average utility of the remaining copies if I'm not. This also means that if everyone is about to be deleted/merged, then I care about the single remaining copy that will come out of it just as much as I care about myself.
Now I've set up my utility and probability; so what happens to my subjective anticipation in the anthropic trilemma? I'll use the chocolate bar as a unit of utility - because, as everyone knows, everybody's utility is linear in chocolate; this is just a fundamental fact about the universe.
First of all, would I give up a chocolate bar now for two to be given to one of the copies if I win the lottery? Certainly not: this loses me 1 utility and only gains me 2/(million × trillion) in return - the two bars raise the copies' average by only 2/trillion, and that happens with probability one in a million. Would I give up a bar now for two to be given to every copy if I win the lottery? No, this loses me 1 utility and only gains me 2/million in return.
So I certainly do not anticipate winning the lottery through this trick.
Would I give up one chocolate bar now, for two chocolate bars to go to the future merged me if I win the lottery? No, this gives me an expected gain of -1 + 2/million, the same as above.
So I do not anticipate having won the lottery through this trick, after merging.
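Since the same arithmetic keeps recurring, here is a minimal sketch of it in code, under my assumptions so far (utility linear in chocolate bars, pre-duplication me valuing the average over future copies, a one-in-a-million ticket, a trillion copies on a win); the variable names are just mine for illustration.

```python
# Expected change in (average) chocolate bars for the three deals so far.
# Assumptions: one-in-a-million ticket, a trillion copies if I win, utility
# linear in bars, and pre-draw me valuing the average over future copies.
P_WIN, N_COPIES = 1e-6, 1e12

deal_two_to_one_copy   = P_WIN * (2 / N_COPIES) - 1  # -1 + 2/(million * trillion)
deal_two_to_every_copy = P_WIN * 2 - 1               # -1 + 2/million
deal_two_to_merged_me  = P_WIN * 2 - 1               # -1 + 2/million, same as above

print(deal_two_to_one_copy, deal_two_to_every_copy, deal_two_to_merged_me)
```

All three come out negative, which is why the pre-draw me refuses them.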
Now let it be after the lottery draw, after the possible duplication, but before I know whether I've won the lottery or not. Would I give up one chocolate bar now in exchange for two for me, if I had won the lottery (assume this deal is offered to everyone)? The SIA odds say that I should: SIA weights the winning branch by its trillion copies times the one-in-a-million odds, against roughly one for the losing branch, so my credence that I've won is about 0.999999, and the expected gain is about -1 + 2×0.999999 ≈ 1.
So once the duplication has happened, I anticipate having won the lottery. This causes a preference reversal, as my previous version would pay to have my copies denied that choice.
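Here is the reversal in numbers - a sketch under the same assumptions as above: SIA credences for a copy, ordinary probability and average-over-copies for the pre-draw me.

```python
# The same deal (pay one bar now, get two if in the winning branch) evaluated
# from the two standpoints. Assumptions as before: one-in-a-million ticket,
# a trillion copies on a win, utility linear in bars.
P_WIN, N_COPIES = 1e-6, 1e12

# A copy's SIA credence that it sits in the winning (duplicated) branch.
p_sia = (P_WIN * N_COPIES) / (P_WIN * N_COPIES + (1 - P_WIN))

copy_view     = 2 * p_sia - 1                      # ~ +1: each mid-trick copy accepts
pre_draw_view = P_WIN * (+1) + (1 - P_WIN) * (-1)  # ~ -1: pre-draw me would pay to block it

print(p_sia, copy_view, pre_draw_view)
```

The same deal is worth roughly +1 to each copy and roughly -1 to the earlier me, which is exactly the reversal.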
Now assume that I have been told I've won the lottery, so I'm one of the trillion duplicates. Would I give up a chocolate bar for the future merged copy to have two? Yes, I would: the utility gain is 2 - 1 = 1.
So once I've won the lottery, I anticipate continuing having won the lottery.
So, to put all these together:
- I do not anticipate winning the lottery through this trick.
- I do not anticipate having won the lottery once the trick is over.
- However, in the middle of the trick, I anticipate having won the lottery.
- This causes a money-pumpable preference reversal (sketched numerically after this list).
- And once I've won the lottery, I anticipate continuing to have won the lottery once the trick is over.
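To make the money pump concrete, here is a sketch; the bookie's fee of a tenth of a bar is invented for illustration, and anything smaller than the expected gains above would do.

```python
# A bookie exploits the reversal: I pay to remove the mid-trick deal, then my
# copies pay to restore it. Assumptions as in the sketches above; the fee is
# an invented illustrative number.
P_WIN, N_COPIES, FEE = 1e-6, 1e12, 0.1

p_sia = (P_WIN * N_COPIES) / (P_WIN * N_COPIES + (1 - P_WIN))

# Before the draw: the deal is worth about -1 to me, so I pay the fee to have
# it withheld from my future copies.
pay_to_block = -(P_WIN * (+1) + (1 - P_WIN) * (-1)) > FEE       # True

# Mid-trick: under SIA the same deal is worth about +1 to each copy, so each
# copy pays the fee to have it reinstated.
pay_to_reinstate = (2 * p_sia - 1) > FEE                        # True

# The bookie pockets both fees and the situation is back where it started.
print(pay_to_block, pay_to_reinstate)
```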
Now, some might argue that there are subtle considerations that make my behaviour the right one, despite the seeming contradictions. I'd rather say - especially seeing the money-pump - that my intuitions are wrong, very wrong, terminally wrong, just as non-utilitarian decision theories are.
However, what I started with was a perfectly respectable utility function. So we will need to add other considerations if we want to get an improved, consistent system. Tomorrow, I'll be looking at some of the axioms and assumptions one could use to get one.
I'm pretty certain at this point that most of the confusion results from mixing up probability and subjective anticipation. Subjective anticipation is a loose heuristic, while probability is much more clearly an element of an (updateless) normative decision criterion. In particular, observations update anticipation, not probability.
Sometimes anticipation conflicts with probability, and instead of seeing them as opposed, it's better to accept that anticipation simply says what it says, that treating it as probability would lead to such-and-such incorrect decisions, and that probability is something different, which gives the correct decisions.
So you anticipate something other than your anticipation? That seems like a definition went wrong somewhere.