Do you think you won't awaken in a room with "no" in the envelope?
I think that I either wake up in a room with "no" in the envelope, or die, in which case my clone continues to live.
Yes, but I also think conscious experience is halted during regular sleep. Also, should multiple copies survive, their conscious experience will continue in all of them. Their subjective probability of finding themselves as any particular copy depends on the relative weightings (i.e., self-locating uncertainty).
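The weighting claim can be sketched numerically. This is a hypothetical illustration of my own, not anything specified in the thread: treat each surviving copy's weight as an unnormalized measure, and normalize to get the subjective probability of finding yourself as that copy.

```python
def self_locating_probs(weights):
    """Normalize relative weightings of surviving copies into a
    subjective probability distribution over 'which copy am I?'."""
    total = sum(weights)
    return [w / total for w in weights]

# Two surviving copies, the first weighted twice as heavily:
# the subjective odds of waking as each are 2/3 and 1/3.
print(self_locating_probs([2, 1]))
```

The choice of weights (copy counts, physical measure, something else) is exactly what the disagreement is about; the arithmetic itself is uncontroversial.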
I find this model implausible. Is there any evidence I can update on?
> I think that I either wake up in a room with "no" in the envelope, or die, in which case my clone continues to live.
But this world I described is (or can be) completely deterministic; how can you be uncertain of what will happen? I understand how I can be subjectively uncertain due to self-locating uncertainty, but there should be no possible objective uncertainty in a deterministic world. The only out I see is if you think consciousness requires non-deterministic physical processes.
> ...I find this model implausible. Is there any evidence I can update on?
A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare. To solve the problem, they direct the AI to create billions of simulated humanities, in the hope that this will serve as a Schelling point for it and make their own universe almost certainly a simulation.
Plausible?
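For scale, the anthropic bookkeeping the builders are counting on can be made explicit (a sketch under my own assumptions, not part of the original scenario): with N simulated humanities plus one basement-level one, a uniform self-locating prior over indistinguishable observers gives P(simulated) = N / (N + 1).

```python
def p_simulated(n_sims):
    """Probability of being in a simulation, assuming a uniform
    self-locating prior over N simulated worlds plus 1 real one."""
    return n_sims / (n_sims + 1)

# With a billion simulated humanities, the builders' universe is
# simulated with probability ~0.999999999 under this prior.
print(p_simulated(10**9))
```

Whether the uniform prior is the right one, and whether the AI treats it as decision-relevant, is the contested part.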