OK, so either I wake up in a room with no envelope or I die (deterministically), depending on which envelope you have put in my room.
I hope you realize that you're just moving the problem to determining which room counts as "yours", considering that neither room had any version of you thinking in it until after one copy was killed.
What exactly happens in the process of cloning certainly depends on the particular cloning technology; the real one is the one that shares a continuous line of conscious experience with me. The (obvious) way for an outsider to detect which one is real is to look at where it came from -- if it was built as a clone, then, well, it is a clone.
The root of our disagreement, then, seems to be this insistence on "continuous". In particular, you and I would disagree on whether consciousness is preserved under teleportation or stasis.
I could try to break that intuition by appealing to discrete time: does your model imply that time is continuous? It would seem unattractive for a model to have to postulate something like that.
What arguments/intuitions are causing you to find your model plausible?
I find a model plausible if it isn't contradicted by evidence and matches my intuitions.
My model doesn't imply continuous time; I don't think I can explain precisely why, because I basically don't know how consciousness works at that level; intuitively, you just replace t + dt with t + 1. Needless to say, I'm uncertain of this, too.
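To put that gloss in symbols (a loose sketch of my own, assuming the stream of experience can be indexed by some state $s$ at all): the continuous version of the continuity requirement would be something like

$$s(t + dt) = s(t) + f(s(t))\,dt,$$

each state flowing out of the one infinitesimally before it, while the discrete version merely asks that each state be generated from its predecessor,

$$s_{t+1} = g(s_t).$$

The "unbroken line" intuition survives the substitution either way, so nothing in it seems to hang on the continuity of $t$ itself.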
Honestly, my best guess is that all these models are wrong.
Now, what arguments cause you to find your model plausible?
A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render observations like their own (pre-success builders in an unsimulated world) extremely rare. To solve the problem, they direct the AI to create billions of simulated humanities, in the hope that this will serve as a Schelling point for them and make their own universe almost certainly a simulation.
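For concreteness (my arithmetic, not part of the scenario as stated): on a naive self-sampling count, if the AI runs $N$ simulated humanities alongside the single basement-level one, the builders' credence in being unsimulated is

$$P(\text{unsimulated}) = \frac{1}{N + 1},$$

which for $N$ in the billions is effectively zero.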
Plausible?