CarlShulman comments on Shock Level 5: Big Worlds and Modal Realism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (140)
Does this theory really alter the probability that your next chocolate bar will turn into a hamster? After all, if there were only one of you, maybe there's a one-in-a-trillion chance that that one is in a simulation whose alien overlords will turn a chocolate bar into a hamster. If there are a trillion of you, and one of those trillion is in such a simulation, and your subjective experience has an equal chance of continuing down any branch, then the probability of the bar turning into the hamster is still one in a trillion. Although I've never seen a proof, intuitively you'd expect those two probabilities to be the same, or at least that you couldn't predict how they differ.
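The arithmetic behind this intuition can be sketched directly. This is only an illustration of the commenter's assumption (uniform subjective weight over identical copies), with made-up numbers, not a claim about any real model:

```python
# Sketch of the comment's intuition, under the contested assumption that
# subjective experience is spread uniformly over identical copies.

TRILLION = 10**12

# Case 1: a single instance of you, with a one-in-a-trillion chance of
# being in the chocolate-bar-to-hamster simulation.
p_single = 1 / TRILLION

# Case 2: a trillion copies of you, exactly one of which is in such a
# simulation. If experience continues down each copy with equal weight,
# the chance of finding yourself in the simulated copy is again 1/trillion.
n_copies = TRILLION
n_simulated = 1
p_many = n_simulated / n_copies

# The two setups assign the same probability to the hamster event.
assert p_single == p_many
```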
It all adds up to normality...except that this takes a lot of the oomph out of the project to reduce existential risk. Saving all humanity from destruction makes a much better motivator for me than reducing the percentage of branches of humanity that end in destruction by an insignificaaEEEEGH MY KEYBOARD JUST TURNED INTO A BADGER!!11asdaghf
And just what does that mean?
I spent a lot of time in the late 90s trying to work out a coherent system of thinking about probabilities that involved things like "your subjective experience has an equal chance of continuing down any branch" but could not make it work out.
Eventually I gave up and went down the road of UDASSA and then UDT, but "your subjective experience has an equal chance of continuing down any branch" seems to be the natural first thing that someone would think of when they think about probabilities in the context of multiple copies/branches. I wish there were a simple and convincing argument for why thinking about probabilities this way doesn't work, so people wouldn't spend too much time on this step before moving on.
The implied difference between making N copies straight away, and making two copies and then making N-1 copies of one of them, might be a simple convincing argument that something really odd is going on.
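That inconsistency can be made concrete with a small calculation. This is a sketch under the assumption being criticized, namely that probability splits uniformly over branches at each copying event:

```python
# Why "equal chance down any branch" misbehaves, assuming probability
# splits uniformly at each copying event (the assumption under attack).
# Both schemes end with the same N identical copies.

N = 1000

# Scheme A: make all N copies in a single step.
# Uniform over branches gives each copy weight 1/N.
p_scheme_a = 1 / N

# Scheme B: first split into 2 copies (each weight 1/2), then copy one
# of them into the remaining N-1 copies (each weight 1/2 * 1/(N-1)).
p_first_copy = 1 / 2                    # the copy that is never re-copied
p_later_copy = (1 / 2) * (1 / (N - 1))  # each of the other N-1 copies

# Same final population, different probabilities for "being" a given copy:
assert p_first_copy != p_scheme_a

# The weights in each scheme still sum to 1, so neither is internally
# inconsistent; they just disagree with each other.
assert abs(p_first_copy + (N - 1) * p_later_copy - 1) < 1e-12
assert abs(N * p_scheme_a - 1) < 1e-12
```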
It doesn't? If I flip a fair coin, I can think of the outcomes as "my subjective experience goes down the branch where heads comes up" and "my subjective experience goes down the branch where tails comes up", and the principle works.
Maybe nothing - maybe the fundamental unit of conscious experience is the observer-moment, and continuity of experience is an illusion - but the consensus on this site seems to be that it's worth talking about in situations like, e.g., quantum suicide or simulation.
Maybe the inferential step would work better than the observer-moment as the fundamental unit?