However, Bayes says that if we assign greater than 10^-8 prior probability to "strange" explanations...
Well, don't do that then. Does 10^-8, besides being the chance of a ticket winning a typical big lottery, also carry the implied meaning "unimaginably small", "so small that one must consider all manner of weird other possibilities whose probabilities we in fact have no way of assessing, but 10^-8 is so extraordinarily small that surely they must be considered alongside the simple explanation that my ticket won"? "How could we ever be 10^-8 sure of anything?"
Because I would dispute that. Consider someone who has a lottery ticket in their hand, for a draw about to be announced, with 1 chance in 100,000,000 of having the winning numbers. If their numbers are drawn, they must overcome 80 dB of prior improbability to be persuaded of that. (It does not matter whether they know that is what they are doing: they are nonetheless doing it.) An impossible task? No: almost all jackpots in the EuroMillions lottery (probability 1/76,275,360) are claimed, by ordinary people successfully comparing two strings of seven numbers and getting the right answer. It is news when a EuroMillions jackpot goes unclaimed for as little as one week.
One of the alternative hypotheses that one must consider, of course, is the mundane "I am mistaken: this is not a winning ticket, despite the fact that I have stared at the two sets of numbers and the date over and over and they still appear to be identical." I don't know how many false positives the claims line gets. But the jackpot is awarded at least every few weeks, and every time it is claimed by people who were not mistaken.
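To put rough numbers on the checking step (the per-comparison error rate below is invented for illustration, not a figure from the claims line), here is a minimal Python sketch of the decibel bookkeeping:

```python
import math

def decibels(odds_ratio):
    """Evidence in decibels: 10 * log10 of an odds or likelihood ratio."""
    return 10 * math.log10(odds_ratio)

# Prior odds of holding the winning ticket: about 1 in 10^8.
prior_odds = 1e-8
print(f"prior improbability: {-decibels(prior_odds):.0f} dB")  # 80 dB

# Invented reliability figure: suppose each careful, independent comparison
# of the ticket against the drawn numbers falsely reports a match with
# probability 1e-3, and the holder checks four times.
false_match_rate = 1e-3
checks = 4
likelihood_ratio = 1.0 / false_match_rate ** checks
print(f"evidence from checking: {decibels(likelihood_ratio):.0f} dB")  # 120 dB

# 120 dB of evidence against 80 dB of prior improbability leaves ~40 dB
# of posterior odds in favour of genuinely having won.
posterior_odds = prior_odds * likelihood_ratio
print(f"P(won | all checks match) = {posterior_odds / (1 + posterior_odds):.4f}")
```

On these made-up numbers, four honest checks are more than enough to swamp the 80 dB prior, which is presumably why the jackpot keeps being claimed by people who were not mistaken.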
There is no such thing as a small number.
There are two questions we must consider, according to Bayes: what is the prior probability that we live in a simulation, and, given that we live in a simulation, what is the probability of winning the lottery?
We can invoke your argument at either point, and I'm not sure which you intended.
-- Is 10^-8 enough evidence to overcome the prior improbability? In this case, "prior" means just before we bought the ticket, so we have a lifetime of evidence to help us decide whether we live in a simulation. (Determining this may be difficult, of course, but the lo...
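For reference, the odds form of Bayes' theorem shows how the two questions combine (a standard identity, with "sim" standing for the simulation hypothesis and "win" for holding a winning ticket):

$$
\frac{P(\mathrm{sim} \mid \mathrm{win})}{P(\neg\mathrm{sim} \mid \mathrm{win})}
= \frac{P(\mathrm{sim})}{P(\neg\mathrm{sim})}
\times
\frac{P(\mathrm{win} \mid \mathrm{sim})}{P(\mathrm{win} \mid \neg\mathrm{sim})}
$$

Since $P(\mathrm{win} \mid \neg\mathrm{sim}) = 10^{-8}$ and $P(\mathrm{win} \mid \mathrm{sim})$ can be at most 1, a win can multiply the odds of the simulation hypothesis by at most a factor of $10^8$.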
I have just finished reading the section on anthropic bias in Nassim Taleb's book, The Black Swan. In general, the book is interesting to compare to the sort of things I read on Less Wrong; its message is largely similar, except less Bayesian (and therefore less formal, at times slightly anti-formal, arguing against misleading math).
Two points concerning anthropic weirdness.
First:
If we win the lottery, should we really conclude that we live in a holodeck (or some such)? From Real-Life Anthropic Weirdness:
It seems to me that the right way of approaching the question is: before buying the lottery ticket, what belief-forming strategy would we prefer ourselves to have? (Ignore the issue of why we buy the ticket, of course.) Or, slightly different: what advice would you give to other people (for example, if you're writing a book on rationality that might be widely read)?
"Common sense" says that it would be quite silly to start believing some strange theory, just because I win the lottery. However, Bayes says that if we assign greater than 10-8 prior probability to "strange" explanations of getting a winning lottery ticket, then we should prefer them. In fact, we may want to buy a lottery ticket to test those theories! (This would be a very sensible test, which would strongly tend to give the right result.)
However, as a society, we would not want lottery-winners to go crazy. Therefore, we would not want to give the advice "if you win, you should massively update your probabilities".
(This is similar to the idea that we might be persuaded to defect in the Prisoner's Dilemma if we are maximizing our personal utility, but if we are giving advice about rationality to other people, we should advise them that cooperating is the optimal strategy. In a somewhat unjustified leap, I suppose we should take the advice we would give to others in such matters. But I suppose that position is already widely accepted here.)
On the other hand, if we were in a position to give advice to people who might really be living in a simulation, it would suddenly be good advice!
Second:
Taleb discusses an interesting example of anthropic bias:
You'll have to read the chapter if you want to know exactly what "argument" is being discussed, but the general point is (hopefully) clear from this passage. If an event was a necessary prerequisite for our existence, then we should not take our survival of that event as evidence that such events have a high survival probability. If we remember surviving a car crash, we should not take that to increase our estimate of the probability of surviving a car crash. (Instead, we should look at other car crashes.)
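A minimal simulation sketch of that asymmetry (all the numbers are invented): agents whose continued existence required surviving every past event will each remember a 100% survival rate, whatever the true rate is; only pooling everyone's events, including the fatal ones, recovers it.

```python
import random

random.seed(0)
TRUE_SURVIVAL = 0.9   # invented per-event survival probability
N_AGENTS = 100_000
N_EVENTS = 20         # dangerous events each agent faces

alive_at_end = 0
events_faced = 0
events_survived = 0
for _ in range(N_AGENTS):
    alive = True
    for _ in range(N_EVENTS):
        events_faced += 1
        if random.random() < TRUE_SURVIVAL:
            events_survived += 1
        else:
            alive = False
            break
    if alive:
        alive_at_end += 1

# Pooling everyone's events (including the fatal ones) recovers the truth...
print(f"population-wide estimate: {events_survived / events_faced:.3f}")  # ~0.900
# ...but every agent still alive remembers surviving all of its own events,
# so personal history alone suggests the events are perfectly safe.
print(f"fraction of agents whose memories show 100% survival: "
      f"{alive_at_end / N_AGENTS:.3f}")  # ~0.9**20, about 0.122
```

The surviving agents' own histories are useless for estimating the danger; the population-wide count is what "looking at other car crashes" amounts to here.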
This conclusion is somewhat troubling (as Taleb admits). It means that the past is fundamentally different from the future! The past will be a relatively "safe" place, where every event has led to our survival. The future is alien and unforgiving. As is said in the story The Hero With A Thousand Chances:
Now, Taleb is saying that we are that hero. Scary, right?
On the other hand, it seems reasonable to be skeptical of a view which presents difficulties generalizing from the past to the future. So. Any opinions?