If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check whether there is an active Open Thread before posting a new one. (Check immediately before: refresh the list-of-threads page right before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Assume that Jar S contains just silver balls, whereas Jar R contains ninety percent silver balls and ten percent red balls.
Someone secretly and randomly picks a jar, with an equal chance of choosing either. The picker then draws N balls at random from his chosen jar, with replacement. For each draw, if the ball is silver he stays silent; if it is red he says “red.”
You hear nothing. Using Bayes’ rule, you make the straightforward calculation of the updated probability that the picker was drawing from Jar S.
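For concreteness: since the draws are with replacement and hence independent, Jar R produces N consecutive silver balls with probability $0.9^N$, while Jar S does so with certainty, so the straightforward update is

\[
P(S \mid \text{silence}) = \frac{P(\text{silence}\mid S)\,P(S)}{P(\text{silence}\mid S)\,P(S) + P(\text{silence}\mid R)\,P(R)} = \frac{1\cdot\tfrac{1}{2}}{1\cdot\tfrac{1}{2} + 0.9^{N}\cdot\tfrac{1}{2}} = \frac{1}{1+0.9^{N}},
\]

which rises toward 1 as N grows.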
But then you learn something. The red balls are bombs and if one had been picked it would have instantly exploded and killed you. Should learning that red balls are bombs influence your estimate of the probability that the picker was drawing from Jar S?
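If it helps to see the naive-conditioning answer concretely, here is a minimal Monte Carlo sketch (my own framing, not part of the original puzzle) that counts, among observers who survive all N draws, the fraction whose picker had Jar S:

```python
import random

def simulate(n_draws, trials=1_000_000):
    """Estimate P(Jar S | you survive / hear nothing) by counting survivors.

    Jar S: all silver. Jar R: 90% silver, 10% red (a red draw kills you).
    Draws are with replacement, so each Jar-R draw is an independent
    10% chance of death.
    """
    survived_s = survived_r = 0
    for _ in range(trials):
        jar_is_s = random.random() < 0.5  # picker chooses a jar uniformly
        if jar_is_s:
            survived_s += 1  # Jar S can never produce a red ball
        else:
            # Survive only if all n_draws from Jar R come up silver
            if all(random.random() < 0.9 for _ in range(n_draws)):
                survived_r += 1
    return survived_s / (survived_s + survived_r)

print(simulate(n_draws=10))  # analytic answer: 1 / (1 + 0.9**10) ≈ 0.7415
```

Among survivors the frequency matches the ordinary Bayesian answer 1/(1 + 0.9^N); whether a survivor is entitled to reason this way, rather than applying some anthropic correction, is exactly the question the bomb variant raises, and the simulation does not settle it.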
I’m currently writing a paper on how the Fermi paradox should cause us to update our beliefs about optimal existential-risk strategies. This hypothetical is meant to get at whether it matters if we assume that aliens would spread at the speed of light, killing everything in their path.
This is related to the Sleeping Beauty Problem, and in general the answer depends on what you're trying to do with "probability". For lots and lots more, Bostrom's PhD thesis is very detailed: Anthropic Bias: Observation Selection Effects in Science and Philosophy.
Bostrom's Observation Selection Effects and Human Extinction Risks paper is less philosophical and sounds more relevant to the paper you're working on.