Whether or not the distinction is worth making, it is irrelevant to my point: both scenarios are very unlikely, and therefore require much more evidence than we currently have.
I assume that your idea is to prevent doomsday or make it less likely. If not, why bother with all these simulations?
Look, does the following seem like solid reasoning to you? Because your arguments are beginning to sound a lot like it.
I am not the first LessWronger to come up with a causality-evading idea, by the way.
A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would make their own observations extremely rare among all observers. To solve the problem, they direct the AI to create billions of simulated humanities, hoping this will serve as a Schelling point for them and make their own universe almost certainly a simulation.
Plausible?