Stuart_Armstrong comments on Updateless anthropics - Less Wrong
I remember you linked me to Radford Neal's paper (pdf) on Full Non-indexical Conditioning. I think FNC is a much nicer way to think about problems like these than SSA and SIA, but I guess you disagree?
To save others from having to wade through the paper, which is rather long, I'll try to explain what FNC means:
First, let's consider a much simpler instance of the Doomsday Argument: At the beginning of time, God tosses a coin. If heads then there will only ever be one person (call them "M"), who is created, matures and dies on Monday, and then the world ends. If tails then there will be two people, one ("M") who lives and dies on Monday and another ("T") on Tuesday. As this is a Doomsday Argument, we don't require that T is a copy of M.
M learns that it's Monday but is given no (other) empirical clues about the coin. M says to herself "Well, if the coin is heads then I was certain to find myself here on Monday, but if it's tails then there was a 1/2 chance that I'd find myself experiencing a Tuesday. Applying Bayes' theorem, I deduce that there's a 2/3 chance that the coin is heads, and that the world is going to end before tomorrow."
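M's naive update can be checked numerically. A minimal sketch, assuming a fair coin (prior 1/2) and the likelihoods as stated: under heads M was certain to find herself on Monday, under tails there was a 1/2 chance.

```python
# M's naive (self-sampling) update on learning "it is Monday".
prior_heads = 0.5
p_monday_given_heads = 1.0  # heads: M is the only person, always on Monday
p_monday_given_tails = 0.5  # tails: M is one of two people, Monday or Tuesday

posterior_heads = (prior_heads * p_monday_given_heads) / (
    prior_heads * p_monday_given_heads
    + (1 - prior_heads) * p_monday_given_tails
)
print(posterior_heads)  # 0.6666666666666666, i.e. the 2/3 in the text
```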
Now FNC makes two observations:

1. Rather than updating on the indexical fact "it is Monday", M should update on the non-indexical fact "there once was a person who experienced [complete catalogue of M's mental state] and that person lived on Monday."
2. If we ignore the (at best) remote possibility that T has exactly the same experiences as M (prior to learning which day it is), then the event above is independent of the coin toss.

M takes these points to heart, and therefore calculates a posterior probability of 1/2 that the coin is heads.
On discovering that it's Monday, M gains no evidence that the end of the world is nigh. Notice that we've reached this conclusion independently of decision theory.
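The contrast with the naive update is just that the FNC likelihoods are equal. A sketch, under the stated assumption that T's experiences almost surely differ from M's:

```python
prior_heads = 0.5
# FNC evidence: "a person with M's exact mental state once lived on Monday."
# M exists and lives on Monday under both outcomes, and T almost surely has
# different experiences, so the evidence has likelihood 1 either way.
p_evidence_given_heads = 1.0
p_evidence_given_tails = 1.0

posterior_heads = (prior_heads * p_evidence_given_heads) / (
    prior_heads * p_evidence_given_heads
    + (1 - prior_heads) * p_evidence_given_tails
)
print(posterior_heads)  # 0.5 -- no update on learning it's Monday
```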
If M is 'altruistic' towards T, valuing him as much as she values herself, then she should be prepared to part with one cube of chocolate in exchange for a guarantee that he'll get two if he exists. If M is 'selfish' then the exchange rate changes from 1:2 to 1:infinity. These exchange rates are not probabilities. It would be very wrong to say something like "the probability that M assigns to T's existence only makes sense when we specify M's utility function, and in particular it changes from 1/2 to 0 if M switches from 'altruistic' to 'selfish'".
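The exchange rates can be read off as break-even points of an expected-value calculation. A sketch, assuming chocolate has linear utility and T exists with probability 1/2 (tails):

```python
def altruistic_value(cubes_paid, cubes_t_receives, p_tails=0.5):
    """Change in M's total expected chocolate, counting T's as her own."""
    return -cubes_paid + p_tails * cubes_t_receives

def selfish_value(cubes_paid, cubes_t_receives, p_tails=0.5):
    """Change in M's own expected chocolate; T's share counts for nothing."""
    return -cubes_paid

print(altruistic_value(1, 2))  # 0.0: indifferent, so the fair rate is 1:2
print(selfish_value(1, 2))     # -1.0: any finite rate is a loss, hence 1:infinity
```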
I used to be a great believer in FNC, but I've found it's flawed. The main problem is that it's not time-consistent.
For instance, suppose you start with some identical copies, each of whom is going to flip a coin twenty times. Before they flip, FNC says they should not believe that they are in a large universe, because their experiences are identical.
However, after they have flipped, their memories will almost certainly differ, and so they will believe that they are in a large universe.
So each copy knows in advance that, whatever sequence of flips she sees, her probability of being in a large universe will have increased.
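This can be made quantitative. Treat the twenty flips as a 20-bit memory string, and suppose the 'large' universe holds N copies versus one person in the small universe; the prior of 1/2 and N = 1000 below are illustrative assumptions, not from the original.

```python
def fnc_posterior_large(n_bits, n_copies, prior_large=0.5):
    """FNC posterior that the universe is large, given that someone
    has the agent's exact n_bits-bit memory string."""
    p = 2.0 ** -n_bits                    # chance a given copy has that string
    like_small = p                        # one person in the small universe
    like_large = 1 - (1 - p) ** n_copies  # at least one of n_copies matches
    return (prior_large * like_large) / (
        prior_large * like_large + (1 - prior_large) * like_small
    )

print(fnc_posterior_large(0, 1000))   # 0.5: identical copies, no update
print(fnc_posterior_large(20, 1000))  # close to 1000/1001, whatever was flipped
```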
The problem isn't restricted to starting with identical copies: whenever you increase your memory size by one bit, say, FNC becomes slightly inconsistent (because (1+e)^-n is approximately 1-ne for small e, but not exactly).
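The size of that discrepancy is easy to check directly. A sketch, with e playing the role of a small per-copy matching probability and n the number of copies (the particular values are illustrative):

```python
e, n = 1e-6, 1000
exact = (1 + e) ** -n   # the exact factor appearing in the FNC likelihoods
approx = 1 - n * e      # the first-order approximation
print(exact - approx)   # small but nonzero: the source of the slight inconsistency
```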
Yes, that is definitely a problem! The variation of FNC which I described in the final section of my UDT post has each person being allowed to help themselves to a uniform random number in [0,1] - i.e. infinitely many random "coin flips" - as long as they don't try to actually use the outcomes.
This solves the problem you mention, but others arise:
Actually, using (2), and variations alpha to gamma, I think I can construct a continuum of variations on Sleeping Beauty which stretch from one where the answer is unambiguously 1/3 (or 1/11 as in the link) to one where it's unambiguously 1/2.
OK, I recant and denounce myself - the idea that any sensible variation of the Sleeping Beauty problem must have a 'canonical' answer is wrong, and FNC is broken.
Very admirable stance to take :-) I wish I could claim I found the problem and immediately renounced SIA and FNC, but it was a long process :-)
Btw, a variant similar to your alpha to gamma was presented in my post http://lesswrong.com/lw/18r/avoiding_doomsday_a_proof_of_the_selfindication ; I found the problem with that in http://lesswrong.com/lw/4fl/dead_men_tell_tales_falling_out_of_love_with_sia/