Regarding the first question: we are making an argument about what kind of advice we would want to give to people, and we are also considering a hypothesis under which most people aren't "conscious". On a functionalist philosophy of mind, perhaps we cannot expect to change the behavior of non-conscious people by giving them advice. If so, maybe there is no reason to want to advise anyone against updating in favor of a simulation hypothesis.
On the other hand, there is the hypothesis that everyone is "conscious", but we are in a simulation in which good things happen to one particular person. In that case, every time someone wins the lottery, we should update in favor of a simulation hypothesis under which that person is special.
I have just finished reading the section on anthropic bias in Nassim Taleb's book, The Black Swan. In general, the book is interesting to compare to the sort of thing I read on Less Wrong; its message is largely similar, though less Bayesian and therefore less formal (at times slightly anti-formal, arguing against misleading math).
Two points concerning anthropic weirdness.
First:
If we win the lottery, should we really conclude that we live in a holodeck (or some such)? From real-life anthropic weirdness:
It seems to me that the right way of approaching the question is: before buying the lottery ticket, what belief-forming strategy would we prefer ourselves to have? (Ignore the issue of why we are buying the ticket at all, of course.) Or, in a slightly different framing: what advice would we give to other people (for example, if we were writing a book on rationality that might be widely read)?
"Common sense" says that it would be quite silly to start believing some strange theory, just because I win the lottery. However, Bayes says that if we assign greater than 10-8 prior probability to "strange" explanations of getting a winning lottery ticket, then we should prefer them. In fact, we may want to buy a lottery ticket to test those theories! (This would be a very sensible test, which would strongly tend to give the right result.)
However, as a society, we would not want lottery-winners to go crazy. Therefore, we would not want to give the advice "if you win, you should massively update your probabilities".
(This is similar to the idea that we might be persuaded to defect in the Prisoner's Dilemma when maximizing our personal utility, but when giving advice about rationality to other people, we should advise them to cooperate, since advice is effectively a policy adopted by everyone who hears it. In a somewhat unjustified leap, I suppose we should take the advice we would give to others in such matters. But I suppose that position is already widely accepted here.)
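To spell out the analogy, here is the standard textbook payoff structure (the specific numbers are just the conventional ones; nothing here is from Taleb):

```python
# Payoffs to *me* as (my move, their move); C = cooperate, D = defect.
payoff = {("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1}

# Maximizing personal utility: whatever the other player does, D pays more.
for theirs in ("C", "D"):
    assert payoff[("D", theirs)] > payoff[("C", theirs)]

# But published advice is adopted by both sides of future dilemmas,
# and as a universal policy "cooperate" (3 each) beats "defect" (1 each).
print("all cooperate:", payoff[("C", "C")], "| all defect:", payoff[("D", "D")])
```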
On the other hand, if we were in a position to give advice to people who might really be living in a simulation, it would suddenly be good advice!
Second:
Taleb discusses an interesting example of anthropic bias:
You'll have to read the chapter if you want to know exactly what "argument" is being discussed, but the general point is (hopefully) clear from this passage. If an event was a necessary prerequisite for our existence, then we should not take our survival of that event as evidence that such events are survivable with high probability. If we remember surviving a car crash, that memory should not raise our estimate of the odds of surviving a car crash; instead, we should look at what happened in other car crashes.
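A toy simulation makes the bias vivid. Suppose, purely for illustration, that this kind of event is survived 30% of the time; every survivor's own memory nonetheless reports a 100% survival rate, and only the outside view recovers the truth:

```python
import random

random.seed(0)
TRUE_SURVIVAL = 0.3  # made-up true chance of surviving the event
N = 100_000

survived = [random.random() < TRUE_SURVIVAL for _ in range(N)]

# Condition on survival (ask only the survivors about their own history):
survivors = [s for s in survived if s]
survivor_view = sum(survivors) / len(survivors)  # exactly 1.0, always

# The outside view counts everyone, including those who didn't make it:
population_view = sum(survived) / N              # ~0.3

print(f"rate in any survivor's memory: {survivor_view:.1f}")
print(f"rate across everyone:          {population_view:.3f}")
```

The survivors' estimate is not merely biased but uninformative: it equals 1 no matter what TRUE_SURVIVAL is, which is exactly why the remembered past looks safe.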
This conclusion is somewhat troubling (as Taleb admits). It means that the past is fundamentally different from the future! The past looks like a relatively "safe" place, where every event has led to our survival; the future is alien and unforgiving. As is said in the story The Hero With A Thousand Chances:
Now, Taleb is saying that we are that hero. Scary, right?
On the other hand, it seems reasonable to be skeptical of a view on which generalizing from the past to the future becomes so difficult. So. Any opinions?