AlexLundborg comments on Open Thread March 28 - April 3, 2016 - Less Wrong
You could get a truly random number using cosmic rays from remote quasars, but I think that true quantum randomness is not necessary in this case. Big world immortality could work anyway - there are many other Earths in the multiverse.
Superposition may also not be necessary for QI to work. It could be useful if you want some kind of interaction between different outcomes, but that seems impossible for such a large system.
The main thing I would worry about, if I tried to use QI to survive x-risks, is that the death of the whole civilization should be momentary. If it is not momentary, there will be a period of time when observers know that a given risk has begun but have not yet died, and so they will be unable to "jump" to another outcome. Only false vacuum decay provides momentary death for everybody (though not exactly simultaneous, given Earth's diameter of roughly 12,000 km and the finite speed of light).
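For reference, the light-travel time across Earth's diameter sets a lower bound on how non-simultaneous even a false vacuum decay front would be. A quick back-of-the-envelope sketch (the figures are standard physical constants, not from the comment itself):

```python
# Minimum delay for any causal influence to cross Earth: even a
# "momentary" catastrophe like false vacuum decay reaches the far
# side of the planet only after a light-speed-limited delay.
earth_diameter_km = 12_742         # mean diameter of Earth
speed_of_light_km_s = 299_792.458  # speed of light in vacuum

delay_s = earth_diameter_km / speed_of_light_km_s
print(f"minimum delay: {delay_s * 1000:.1f} ms")  # about 42.5 ms
```

So "momentary" here can only ever mean "within a few tens of milliseconds", which is still far shorter than any conscious reaction time.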
Another way of using QI against x-risks is to note that the me-observer must survive any x-risk, if QI is true. So any x-risk will have at least one survivor - one wounded man on an empty planet.
We could use this effect to ensure that a group of people survives, if we connect the me-observer with that group through a necessary condition of dying together. For example, we are all locked in a submarine full of explosives. In most worlds there are only two outcomes: either the entire crew of the submarine dies, or everybody survives.
If I am in such a submarine, and QI works, then we - the whole crew - will probably survive any x-risk.
In short, the idea is to convert a slow x-risk into a momentary catastrophe for a group of people. In the same way, we may use QI personally to fight slow death from aging, if we sign up for cryonics.
Whether or not momentary death is necessary for multiverse immortality depends on which view of personal identity is correct. According to empty individualism, it should not matter that you know you will die; you will still "survive" without remembering having died, as if that memory had been erased.
I think the point is that if extinction is not immediate, then the whole civilisation can't exploit big world immortality to survive; every single member of that civilisation would still survive in their own piece of reality, but alone.
According to empty individualism, it doesn't really matter whether it is immediate. What matters instead is that the chance of survival in the branches where you try to die must be much lower than the chance of choosing that world.
You can never make a perfect doomsday device, because all kinds of things could happen to make it fail, either at the moment of use or during preparation, even if it operates immediately.