Seems backwards. If you are a society that has actually designed and implemented an AI and infrastructure capable of "creating billions of simulated humanities", then de facto you are the "real" set: you can see the simulated ones, and a recursive nesting of such simulations should, in theory, leave artifacts of some sort (i.e., a "fork bomb", in Unix parlance).
No, the entire point is not to know whether you are simulated before the Singularity. Afterwards, the danger is already averted.
I rather think that, pragmatically, if a simulated society developed an AI capable of simulating society in sufficient fidelity, the process would self-limit: either the nested simulations would simply lack fidelity, or the +1 society running us would go "whoops, that one is spinning up exponentially" and shut us down. If you really believe you are in a simulated society, running a simulation like this would be tantamount to suicide...
Why? The terminal point is creation of FAI. But they wouldn't shut down the humans of the simulation; that would defeat the whole point of the thing.
I don't find the Doomsday argument compelling, simply because it rejects a hypothesis ("we are in the first few percent of humans ever born") purely on the grounds that it is improbable.
...so you are arguing that probability doesn't mean anything? Something that will happen in 99.99% of universes can be safely assumed to occur in ours.
Absent other information.
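The "absent other information" clause is just the self-sampling assumption the Doomsday argument runs on. A minimal Monte Carlo sketch of that assumption (my own illustration; the population figures are placeholders, not from the thread):

```python
import random

def p_first_fraction(total_humans, cutoff, trials=100_000, seed=0):
    """Estimate P(birth rank < cutoff) for an observer sampled
    uniformly from all humans who will ever exist."""
    rng = random.Random(seed)
    hits = sum(rng.randrange(total_humans) < cutoff for _ in range(trials))
    return hits / trials

# If history ultimately contains 100 trillion humans, finding yourself
# among the first 100 billion is roughly a 0.1% coincidence -- which is
# why the Doomsday argument treats that observation as evidence against
# a long future, absent other information.
print(p_first_fraction(100_000_000_000_000, 100_000_000_000))
```

The disagreement upthread is about whether "it's improbable" is a legitimate reason to update, or whether the improbable hypothesis can simply be true anyway.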
A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare. To solve the problem, they direct the AI to create billions of simulated humanities, in the hope that this will serve as a Schelling point for them and make their own universe almost certainly one of the simulations.
Plausible?