MWI: we don't know what it is that works, but we can tell when something doesn't work. The probabilities don't seem to work out if you just count distinct observers. Moreover, the number of distinct observers grows very rapidly with time, so you get an extreme case of the doomsday paradox. And if you aren't just counting distinct observers but counting copies twice, then your probabilities might as well depend on, e.g., the thickness of the wires in the computer, not just the raw number of simulated realities.
More significantly, under MWI it is not even clear what the first two statements could mean.
I think you are misunderstanding the SA, which is surprising since it's formally pretty simple.
We are discussing Nick Bostrom, and I take http://en.wikipedia.org/wiki/Nick_Bostrom to be at least somewhat representative of his contribution to the simulation argument.
The trilemma as stated is:
1. No civilization will reach a level of technological maturity capable of producing simulated realities.
2. No civilization reaching the aforementioned technological status will produce a significant number of simulated realities, for any of a number of reasons, such as diversion of computational processing power to other tasks, ethical considerations about holding entities captive in simulated realities, etc.
3. Any entities with our general set of experiences are almost certainly living in a simulation.
I assumed that the last statement is to be taken as 'we should expect to be in a sim if the first two conditions are false, given our general set of experiences', on the assumption that the trilemma is at least rudimentarily relevant.
In that case there is a fourth possibility, with probability overwhelmingly higher than that of this entire argument: that the wild guess (that there would ever be a good reason to believe we should be among the most numerous, with the same weight for the real thing as for the simulator, or the same weight for different types of simulators) is simply not spot on. This is corroborated by the fact that we are not among those in weird sims of any kind (we'd detect a god speaking to us every day).
Furthermore, the distinction between a perfect simulator and reality strikes me as nonsensical. Until there is a measurement showing that we are in a simulation, we may most sensibly assume we are in both (here merely drawing inspiration from the sort of intuitions we might have had if we believed in MWI). As for the probability of a measurement showing that we are in a simulation, that has an exceptionally good chance of being a much more complicated matter than assumed.
edit: To clarify, my point is that even putting this sort of stuff into words or equations is a great example of the false precision that Nick Szabo complains about. Too many assumptions have to be made, without noticing that they are being made, for the statements to have any meaning at all.
Everyone looks silly from 100 years on. That's not a useful point to make.
Those who weren't grossly wrong (Newton, for example) don't look as silly as the kind of silly I am speaking of.
More significantly, under MWI it is not even clear what the first two statements could mean.
Then we can pardon Bostrom for not taking them into account.
I take http://en.wikipedia.org/wiki/Nick_Bostrom to be at least somewhat representative of his contribution to the simulation argument
Wikipedia is pretty bad on philosophy (the SEP is much better), and in this case, there's no reason not to read Bostrom's original paper and the correction: he writes clearly, and they are readily available on his website.
...In such case there is the fourt
Nick Szabo on acting on extremely long odds with claimed high payoffs:
Beware of what I call Pascal's scams: movements or belief systems that ask you to hope for or worry about very improbable outcomes that could have very large positive or negative consequences. (The name comes of course from the infinite-reward Wager proposed by Pascal: these days the large-but-finite versions are far more pernicious). Naive expected value reasoning implies that they are worth the effort: if the odds are 1 in 1,000 that I could win $1 billion, and I am risk and time neutral, then I should expend up to nearly $1 million dollars worth of effort to gain this boon. The problems with these beliefs tend to be at least threefold, all stemming from the general uncertainty, i.e. the poor information or lack of information, from which we abstracted the low probability estimate in the first place: because in the messy real world the low probability estimate is almost always due to low or poor evidence rather than being a lottery with well-defined odds.
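The expected-value arithmetic in the quoted passage can be sketched in a few lines (a minimal illustration of Szabo's example, nothing more):

```python
# Naive expected-value reasoning from Szabo's example:
# a 1-in-1,000 chance of winning $1 billion.
p_win = 1 / 1000           # claimed probability of the payoff
payoff = 1_000_000_000     # claimed payoff in dollars

expected_value = p_win * payoff
print(expected_value)  # 1000000.0 -> "up to nearly $1 million dollars worth of effort"
```

His point is that this arithmetic is only as good as the probability estimate fed into it, and for Pascal's scams that estimate is an abstraction over poor or absent evidence rather than a well-defined lottery odds.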
Nick clarifies in the comments that he is indeed talking about singularitarians, including his GMU colleague Robin Hanson. This post appears to revisit a comment on an earlier post:
In other words, just because one comes up with quasi-plausible catastrophic scenarios does not put the burden of proof on the skeptics to debunk them or else cough up substantial funds to supposedly combat these alleged threats.