No, the entire point is that you do not know whether you are simulated before the Singularity. Afterwards, the danger has already been averted.
Then perhaps I simply do not understand the proposal.
The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.
This is where I am confused. The "of course" is not very "of coursey" to me. Can you explain how a self-modifying AI would be risky in this regard? (A citation is fine; you do not need to repeat a well-known argument I am simply ignorant of.)
I am also foggy on the terminology - DA, FAI, and so on. I don't suppose there's a glossary around. OK - DA is "Doomsday Argument" from the thread context (which seems silly to me - the SSA, the Self-Sampling Assumption, seems to be wrong on the face of it, which then invalidates the DA).
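For concreteness, the DA update as I understand it is just Bayes with an SSA likelihood of 1/N: you treat yourself as a uniform random draw from all N humans who will ever live. A toy sketch, with made-up population figures (illustrative assumptions, not anyone's actual estimates):

```python
# Toy Doomsday Argument update under the Self-Sampling Assumption (SSA):
# treat yourself as a uniform random draw from all N humans who will
# ever live, so P(your birth rank r | N) = 1/N for any r <= N.
# All population figures below are illustrative assumptions only.

rank = 100e9  # roughly 100 billion humans born so far (assumed)

hypotheses = {
    "doom_soon": 200e9,   # 200 billion humans ever, total (assumed)
    "doom_late": 200e12,  # 200 trillion humans ever, total (assumed)
}
prior = {"doom_soon": 0.5, "doom_late": 0.5}

# The SSA likelihood is 1/N whenever rank <= N, so the rank only
# enters through this consistency check.
assert all(rank <= n for n in hypotheses.values())

# Bayes: posterior(N) is proportional to prior(N) * (1/N)
unnorm = {h: prior[h] / n for h, n in hypotheses.items()}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

print(posterior)  # doom_soon ~ 0.999, doom_late ~ 0.001
```

The large-population hypothesis gets penalized by a factor of 1000 purely because your rank is less surprising under the small one - that penalty is the whole argument, and it stands or falls with the SSA.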
Can you explain how a self-modifying AI would be risky in this regard? (A citation is fine; you do not need to repeat a well-known argument I am simply ignorant of.)
I'm not sure that you can avoid picking it up just by being on this site. http://www.yudkowsky.net/singularity/ai-risk/
which seems silly to me - the SSA seems to be wrong on the face of it
You clearly know something I don't.
A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare. To solve the problem, they direct the AI to create billions of simulated humanities, in the hope that this will serve as a Schelling point for them and make their own universe almost certainly simulated.
Plausible?
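At least the arithmetic goes through, assuming indifference between observers in indistinguishable simulated and unsimulated humanities: with N simulated humanities and one unsimulated original, a random observer is in a simulation with probability N/(N+1). A minimal sketch - this only checks the counting, not the anthropic premise, and the count itself is an assumption:

```python
# Arithmetic of the proposal, assuming indifference between observers
# in indistinguishable simulated and unsimulated humanities.
n_sims = 1e9  # "billions of simulated humanities" (assumed count)

# N simulated humanities plus the one unsimulated original:
p_simulated = n_sims / (n_sims + 1)
print(f"P(our universe is simulated) = {p_simulated:.9f}")  # ~ 0.999999999
```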