JGWeissman comments on Avoiding doomsday: a "proof" of the self-indication assumption - Less Wrong

Post author: Stuart_Armstrong 23 September 2009 02:54PM




Comment author: JGWeissman 08 April 2010 04:32:30AM 1 point

If I repeat the door experiment many times using pi's millionth bit, whoever is behind the red door must die, and whoever's behind the blue doors must survive.

That would be like repeating the coin version of the experiment many times using the exact same coin (in the exact same condition), flipping it in the exact same way, in the exact same environment. Even though you don't know all these factors of the initial conditions, or have the computational power to draw conclusions from them, the coin still lands the same way each time.

Since you are willing to suppose that these initial conditions are different in each trial, why not analogously suppose that in each trial of the digit-of-pi version of the experiment, you compute a different digit of pi? Or, more generally, that in each trial you compute a different logical fact that you were initially completely ignorant about?

Comment author: cupholder 08 April 2010 05:41:09AM 0 points

Since you are willing to suppose that these initial conditions are different in each trial, why not analogously suppose that in each trial of the digit-of-pi version of the experiment, you compute a different digit of pi?

Yes, I think that would work: if I remember right, zeroes and ones are equally likely in pi's binary expansion, so it would successfully mimic flipping a coin with random initial conditions. (ETA: interestingly, pi has apparently not yet been proven to have this property, that is, pi is not known to be normal in base 2, though it is widely conjectured and empirically plausible.)
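The empirical balance of zeroes and ones can be checked directly. Here is a minimal sketch (names like `pi_bits` are my own, not from the thread) that computes the first 10,000 bits of pi's fractional binary expansion using fixed-point integer arithmetic and Machin's formula, pi = 16 arctan(1/5) - 4 arctan(1/239), then tallies the bits. This only demonstrates the empirical balance on a finite prefix; it proves nothing about normality.

```python
from collections import Counter

def pi_bits(n):
    """Return the first n bits of pi's fractional binary expansion,
    computed in fixed-point integer arithmetic via Machin's formula:
    pi = 16*atan(1/5) - 4*atan(1/239)."""
    prec = n + 32  # extra guard bits to absorb truncation error

    def atan_inv(x):
        # atan(1/x) as a fixed-point integer with `prec` fractional bits,
        # using the alternating series atan(1/x) = sum (-1)^k / ((2k+1) x^(2k+1))
        total = term = (1 << prec) // x
        x2, k, sign = x * x, 3, -1
        while term:
            term //= x2
            total += sign * term // k
            k += 2
            sign = -sign
        return total

    pi_fp = 16 * atan_inv(5) - 4 * atan_inv(239)
    frac = pi_fp & ((1 << prec) - 1)  # drop the integer part (3)
    return [(frac >> (prec - 1 - i)) & 1 for i in range(n)]

bits = pi_bits(10000)
print(Counter(bits))  # counts of 0s and 1s, roughly 5000 each
```

In runs like this the counts come out close to even, which is what the coin-mimicking argument above needs; but since pi's base-2 normality is unproven, that balance is an observation, not a guarantee.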

Or, more generally, that in each trial you compute a different logical fact that you were initially completely ignorant about?

This would also work, so long as your bag of facts is equally distributed between true facts and false facts.