The answer to your first question depends on why you play. If you play the lottery for the usual reason (because people are irrational and cannot do math), then it is safe to say not only that you would be better off not doing any updating if you magically end up in the winners' pool, but also that you, in all probability, have no idea what Bayes' theorem is. This is me looking at you, hypothetical irrational person.
However, if you buy a lottery ticket in order to test whether you're in a simulation, then you have to consider that one of four things is true:
I think #4 is the only interesting possibility, so it might make sense to buy one lottery ticket, although that's not very clever as tests go. I wouldn't recommend buying more than one, though; they're addictive. You could also attempt to perform private miracles to see if they work, which is even less clever. I admit to having done this.
But it raises further questions, such as: why are we able to talk about simulations at all?
Because things are connected: making humans unable to talk about or even conceive of simulations would require great changes to the simulated history in many other places.
In the scenario where future humans are running the sim to recreate their own (exact or approximate) past history, the simulated humans have to know and talk about simulations, because they are simulating the original humans, who went on to build simulations!
I have just finished reading the section on anthropic bias in Nassim Taleb's book, The Black Swan. In general, the book makes for an interesting comparison with the sort of things I read on Less Wrong: its message is largely similar, except less Bayesian, and therefore less formal (at times slightly anti-formal, arguing against misleading math).
Two points concerning anthropic weirdness.
First:
If we win the lottery, should we really conclude that we live in a holodeck (or some such)? From real-life anthropic weirdness:
It seems to me that the right way to approach the question is to ask: before buying the lottery ticket, what belief-forming strategy would we prefer ourselves to have? (Ignore the issue of why we buy the ticket, of course.) Or, slightly differently: what advice would you give to other people (for example, if you were writing a book on rationality that might be widely read)?
"Common sense" says that it would be quite silly to start believing some strange theory, just because I win the lottery. However, Bayes says that if we assign greater than 10-8 prior probability to "strange" explanations of getting a winning lottery ticket, then we should prefer them. In fact, we may want to buy a lottery ticket to test those theories! (This would be a very sensible test, which would strongly tend to give the right result.)
However, as a society, we would not want lottery-winners to go crazy. Therefore, we would not want to give the advice "if you win, you should massively update your probabilities".
(This is similar to the idea that we might be persuaded to defect in the Prisoner's Dilemma if we are maximizing our personal utility, but that if we are giving advice about rationality to other people, we should advise them that cooperating is the optimal strategy. In a somewhat unjustified leap, I suppose we should take the advice we would give to others in such matters. But I suppose that position is already widely accepted here.)
On the other hand, if we were in a position to give advice to people who might really be living in a simulation, it would suddenly be good advice!
Second:
Taleb discusses an interesting example of anthropic bias:
You'll have to read the chapter if you want to know exactly which "argument" is being discussed, but the general point is (hopefully) clear from this passage. If an event was a necessary prerequisite for our existence, then we should not take our survival of that event as evidence that such events are highly survivable. If we remember surviving a car crash, we should not take that memory as reason to raise our estimate of the odds of surviving a car crash. (Instead, we should look at other people's car crashes.)
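A toy Monte Carlo run makes the selection effect vivid. This is just a sketch; the 30% survival rate is an arbitrary assumption, and Python's `random` plays the role of the world.

```python
import random

# Illustrative assumption: each crash is survived with probability 0.3.
TRUE_SURVIVAL_RATE = 0.3
N = 100_000

outcomes = [random.random() < TRUE_SURVIVAL_RATE for _ in range(N)]

# The honest estimate: count survivors among all crashes.
overall_estimate = sum(outcomes) / len(outcomes)

# The anthropic estimate: poll only the people still around to answer.
survivors = [o for o in outcomes if o]
survivor_estimate = sum(survivors) / len(survivors)

print(f"estimate from all crashes:    {overall_estimate:.2f}")   # ~0.30
print(f"estimate from survivors only: {survivor_estimate:.2f}")  # always 1.00
```

Polling only the survivors yields a 100% survival rate no matter how deadly the event actually is, which is why our own survived past is such misleading evidence.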
This conclusion is somewhat troubling (as Taleb admits). It means that the past is fundamentally different from the future! The past will be a relatively "safe" place, where every event has led to our survival. The future is alien and unforgiving. As is said in the story The Hero With A Thousand Chances:
Now, Taleb is saying that we are that hero. Scary, right?
On the other hand, it seems reasonable to be skeptical of a view which presents difficulties generalizing from the past to the future. So. Any opinions?