Suppose that if the Riemann Hypothesis were true, then some complicated but relatively well-accepted corollary involving geometric superstring theory and cosmology would imply that the universe contains 10^500 times more observers. Suppose furthermore that this corollary argument (RH ==> 10^500 times more observers) is accepted as true with very high probability (say, 99.9%).
A presumptuous philosopher now has a "proof" of the Riemann Hypothesis. Just use the self-indication assumption: reason as if you are an observer chosen at random from the set of all possible observers (in your reference class). Since almost all possible observers arise in "possible worlds" where RH is true, you are almost certainly one of these.
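The presumptuous philosopher's update can be made explicit with a little Bayesian arithmetic. The sketch below is purely illustrative (the function name and the 0.1% prior are my own choices, not anything in the original argument): SIA weights each hypothesis by the number of observers it predicts and then renormalizes.

```python
from fractions import Fraction

def sia_posterior(prior, observer_ratio):
    """Posterior credence in a hypothesis under the self-indication
    assumption (SIA): weight each hypothesis by the number of observers
    it predicts, then renormalize.

    prior          -- credence in the hypothesis before the anthropic update
    observer_ratio -- (observers if true) / (observers if false)
    """
    weighted_true = prior * observer_ratio
    weighted_false = 1 - prior
    return weighted_true / (weighted_true + weighted_false)

# Even a sceptical 0.1% prior in RH is swamped by a 10^500 observer ratio:
posterior = sia_posterior(Fraction(1, 1000), Fraction(10) ** 500)
print(posterior > 1 - Fraction(1, 10**400))  # True: near-certainty in RH
```

Exact rationals (`Fraction`) are used because 10^500 overflows ordinary floats; the point is just that any non-negligible prior gets pushed to near-certainty by the observer ratio.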
Do we believe this argument?
One argument against it is that, if RH is false, then the "possible worlds" where it is true are not possible at all. They are not merely non-actual; they are as ridiculous as worlds where 1+1=3.
Furthermore, the justification for reasoning anthropically is that the set Ω of observers in your reference class maximizes its combined winnings on bets if all members of Ω reason anthropically; otherwise, they act as a "collective sucker". Unless you have reason to believe you are a "special" member of Ω, you should assume that your best move is to reason as if you are a generic member of Ω, i.e. anthropically. When most of the members of Ω arise from merely non-actual possible worlds, this reasoning is defensible. When most of the members of Ω arise from non-actual impossible worlds, something seems to have gone wrong. Observers who would only exist in logically impossible worlds can't make bets, so the "collective sucker" arguments don't really work.
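The "collective sucker" justification can be illustrated with a toy betting setup. All the numbers below are hypothetical, chosen only to make the effect visible: a fair coin decides between a world with one observer and a world with three, and each observer is offered an even-odds bet that they live in the many-observer world.

```python
def collective_ev(p_many=0.5, n_few=1, n_many=3, stake=1, payout=2):
    """Combined expected winnings if every observer in the reference class
    accepts an even-odds bet that they live in the many-observer world.
    All parameters are illustrative: a fair coin decides between a world
    with `n_few` observers and one with `n_many`; each observer stakes
    `stake` and collects `payout` if the many-observer world is actual.
    """
    lose = (1 - p_many) * n_few * (-stake)     # few-world observers all lose their stake
    win = p_many * n_many * (payout - stake)   # many-world observers each net payout - stake
    return lose + win

print(collective_ev())  # 0.5*(-1) + 0.5*3*1 = 1.0: accepting beats declining (0)
```

If every member of the reference class accepts, the group's combined expected winnings are positive, even though the coin is fair; if they all decline, they win nothing. That is the sense in which reasoning anthropically is the group's best move. The argument presupposes observers who can actually make bets, which is exactly what fails for observers in impossible worlds.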
If you think that the above argument in favor of RH is a little bit fishy, then you might want to ponder Katja's ingenious SIA great filter argument. Most plausible explanations for a future great filter are logical facts, not empirical ones. If the difficulty of surviving a transition through technological singularities convergently causes non-colonization, then that difficulty is a logical fact, derivable by a sufficiently powerful mind. Likewise, a tendency for advanced civilizations to "realize" that expansionism is pointless is a logical fact. I would argue that anthropic considerations should not move us on such logical facts.
Therefore, if you still buy Katja's argument, and you don't endorse anthropic reasoning as a valid method of mathematical proof, you need to search for an empirical fact that causes a massive great filter just after the point in civilization that we're at.
The supply of these is limited. Most explanations of the great filter/Fermi paradox postulate some convergent dynamic that occurs every time a civilization reaches a certain level of advancement; but since these are all things you could work out from first principles, e.g. by Monte Carlo simulation, they are logical facts. Some other explanations survive because they posit that our background facts are false, e.g. the Zoo Hypothesis and the Simulation Hypothesis.
Let us suppose that we're not in a zoo or a simulation. It seems that the only possible empirical cause of a great filter that fits the bill is something that was decided at the very beginning of the universe: some contingent fact about the standard model of physics (which, according to most physicists, was fixed by a symmetry-breaking process, decided at random at the beginning of the universe). Steven0461 points out that particle accelerator disasters are ruled out, as we could in principle colonize the universe using Project Orion spaceships right now, without doing any more particle physics experiments. I am stumped as to what kind of fact this could be. The Simulation Hypothesis therefore seems to be the biggest winner from Katja's SIA doomsday argument, unless anyone has a better idea.
Update: Reader bogdanb points out that there are very simple logical "possibilities" that would result in there being lots of observers, such as the possibility that 1+1 equals some suitably huge number, such as 10^^^^^^^^10. You know there is one observer, you, and another observer, your friend; therefore, according to this "logical possibility", there are 10^^^^^^^^10 observers. If you reason according to SIA, you might end up doubting elementary arithmetical truths.
I believe this argument. One way of thinking about it, which may seem less counter-intuitive, is to imagine that the RH is one of a large group of mathematical propositions, each of which (for argument's sake) we give a subjective probability of 50% of being true.
Now suppose that one of these propositions, X, is selected at random, and we know that X being true implies 10^500 times more observers. Here we are in something akin to the old-fashioned SIA situation: we expect roughly half the propositions to be true, so standard probability is all we need to state SIA.
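This "old-fashioned SIA situation" can be checked with a quick Monte Carlo sketch. The setup is my own illustration, not part of the original argument, and it uses a small observer ratio of 3 rather than 10^500 so that the effect is visible in a simulation: sample many worlds where X is true with probability 50%, give X-true worlds more observers, and ask what fraction of all observers find themselves in X-true worlds.

```python
import random

def sia_frequency(prior=0.5, ratio=3, trials=100_000, seed=0):
    """Monte Carlo check of the SIA update.

    Each trial: a proposition X is true with probability `prior`.
    Worlds where X is true contain `ratio` times as many observers.
    We pool the observers generated across all trials and report the
    fraction who live in X-true worlds. SIA predicts
    prior*ratio / (prior*ratio + 1 - prior).
    """
    rng = random.Random(seed)
    observers_in_true = 0
    total_observers = 0
    for _ in range(trials):
        x_true = rng.random() < prior
        n = ratio if x_true else 1
        total_observers += n
        if x_true:
            observers_in_true += n
    return observers_in_true / total_observers

print(sia_frequency())  # close to 0.75 = 0.5*3 / (0.5*3 + 0.5)
```

With ratio 3 the simulated fraction comes out near 3/4, matching the SIA formula; with ratio 10^500 the same formula gives near-certainty, which is all the presumptuous philosopher needs.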
But now suppose we find out that X is the RH, and that we get no extra information about the likely truth of the RH. By Bayesian rules, if we wish to shift our probability away from the standard SIA answer (towards, for example, making the large universe less likely), then there must be some mathematical proposition that, if used in place of the RH, would shift the probability the other way (towards making the large universe more likely). Since all we know about these propositions is that each has a subjective probability of 50% of being true, they are interchangeable, and no such asymmetric shift is possible.
Take-home message: SIA for uncertainty over empirical facts works if and only if SIA for uncertainty over logical facts does as well.