The map of quantum (big world) immortality
The main idea of quantum immortality (the name "big world immortality" may be better) is that if I die, I will continue to exist in another branch of the world, where I did not die in the same situation.
This map is not intended to cover all known topics about QI, so I need to clarify my position.
I think that QI may work, but I treat it as Plan D for achieving immortality, after life extension (A), cryonics (B) and digital immortality (C). All the plans are here.
I also think that it may be confirmed experimentally: if I turn 120, or turn out to be the only survivor of a plane crash, I will assign a higher probability to it. (But you should not try to prove it deliberately, as you will get this information for free over the next 100 years.)
There is also nothing quantum about quantum immortality, because it may work in a very large non-quantum world, if that world is large enough to contain my copies. It was also discussed here: Shock level 5: Big worlds and modal realism.
There is also nothing good about it, because most of my surviving branches will be very old and ill. But we can make QI work for us if we combine it with cryonics. Just sign up for it (or even seriously intend to sign up), and most likely you will find yourself in a surviving branch in which you are resurrected after cryostasis. (The same is true for digital immortality: record more information about yourself, and a future FAI will resurrect you; QI raises the chances of it.)
I do not buy the "measure" objection. It says that one should care only about one's "measure of existence", that is, the number of branches in which one exists, and that if this number diminishes, one is almost dead. But take the example of a book: it still exists as long as at least one copy of it exists. We also can't measure the measure, because it is not clear how to count branches in an infinite universe.
I also don't buy the ethical objection that QI may lead unstable people to suicide, and that we should therefore claim QI is false. I think the rational understanding of QI is that it either does not work or results in severe injuries. The idea of an immortal soul offers a much stronger temptation to suicide, as it at least promises another, better world, yet I have never heard of that idea being concealed because it might encourage suicide. Religions try to stop suicide (which is logical given their premises) by adding an explicit rule against it. So QI itself does not promote suicide; personal instability is the main cause of suicidal ideation.
I also think there is nothing extraordinary about the QI idea; it adds up to normality (in our immediate surroundings). We have all already witnessed examples of similar ideas: the anthropic principle, and the fact that we find ourselves on a habitable planet while most planets are dead. Or the fact that I was born, but not my billions of potential siblings. Survivorship bias can explain finding oneself in very improbable conditions, and QI is the same idea projected into the future.
The possibility of big world immortality depends on the size of the world and on the nature of "I", that is, on the solution to the personal identity problem. The table below shows how big world immortality depends on these two variables: YES means that big world immortality will work, NO means that it will not.
Both variables are currently unknown to us. Simply put, QI will not work if the (actually existing) world is small, or if personal identity is very fragile.
My a priori position is that the quantum multiverse and a very big universe are both real, and that information is all you need for personal identity. This is the most scientific position, as it agrees with current common knowledge about the Universe and the mind. If I had to bet on theories, I would put 50 per cent on this combination and 50 per cent on all other combinations of theories.
Even in this case QI may not work. It may work technically yet become unmeasurable, if my mind suffers so much damage that it is unable to understand that it is working. In that case it would be completely useless, in the same way that the survival of the atoms of which my body is composed is meaningless. But this may be objected to: perhaps only those of my copies that remember being me should count as me (and such copies will surely exist).
From a practical point of view, QI may help if everything else fails, but we can't count on it, as it is completely unpredictable. QI should be considered only in the context of other world-changing ideas: the simulation argument, the doomsday argument, and future strong AI.

Comments (75)
There might also be situations where surviving is not just ridiculously unlikely, but simply mathematically impossible. That is, I assume that not everything is possible through quantum effects? I'm not a physicist. I mean, what quantum effects would it take to have your body live forever? Are they really possible?
And I have serious doubts that surviving a plane crash or not could be due to quantum effects, but I suppose it could simply be incredibly unlikely. I fear that people might be confusing "possible worlds" in the subjective Bayesian sense and in the quantum many-worlds sense.
In the Soviet Union a woman survived a mid-air head-on collision of two planes: her seat, together with part of the wing, spun as it fell into a forest.
But the main idea here is that the same "me" may exist in different worlds: in one I am in a plane, in the other I am in a plane simulator. I will survive in the second one.
My point was that QM is probabilistic only at the smallest level, for example in the Schrödinger's cat thought experiment. I don't think surviving a plane crash is ontologically probabilistic, unless of course the crash depends on some sort of radioactive decay or something! You can't make it so that you survive the plane crash without completely changing the prior causal networks... all the way back to the beginning of your universe. Maybe there could be a way to very slightly change one of the universal constants so that nothing changes except that you survive, but I seriously doubt it.
As turchin said, it's possible that the person in the plane accident exists in both a "real world" and a simulation, and will survive in the latter. Or they quantum tunnel to ground level before the plane crashes (as far as I know, this has an incredibly small but non-zero probability of occurring, although I'm not a physicist either). Or they're resurrected by somebody, perhaps trillions of years after the crash. And so forth.
In fact, so-called QI does not depend on QM at all. All it needs is a big world in the Tegmark style.
This means that many Earths exist in the universe, and they are different, but "me" is the same on each of them. On one Earth the plane crash kills everybody, and on another there are survivors.
I have never seen it adequately explained exactly what "QI is true" or "QI works" is supposed to mean.
If it just means (as, e.g., in the first paragraph here) "in any situation where it seems like I die, there are branches where I somehow don't": OK, but why is that interesting? I mean, why is it more interesting than "in any situation where it seems like I die, there are very-low-probability ways for it not to happen"?
Whatever intuitions you have for how you should feel about a super-duper-low-probability event, you should apply them equally to a super-duper-low-measure branch, because these are the same thing.
QI predicts the result of a physical experiment. It says that if there are two outcomes, 1 and 2, and I am an observer of this experiment, and in case of outcome 2 I die, then I will measure outcome 1 with 100 per cent probability, no matter what the priors of outcomes 1 and 2 were.
This definition doesn't depend on any "esoteric" ideas about "I" and personal identity. The observer here could be a program running on a Turing machine.
For example, suppose we run 1,000,000 copies of a program, each of which is terminated if its own die roll comes up odd (1, 3, 5) and is not terminated if it comes up even (2, 4, 6). Then each program should expect to measure only 2, 4 or 6, with one-third probability each, and after the dice are rolled only about 500,000 copies of the program will survive.
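This dice experiment can be sketched as a short simulation (a minimal illustration in Python; the setup details are my own, not part of the original argument):

```python
import random

random.seed(0)

N = 1_000_000
# Each copy of the program rolls its own die (1-6).
rolls = [random.randint(1, 6) for _ in range(N)]

# Copies that roll odd (1, 3, 5) are terminated; the rest survive.
survivors = [r for r in rolls if r % 2 == 0]

# Every surviving copy has only ever observed 2, 4 or 6...
assert set(survivors) == {2, 4, 6}
# ...and about half of the original copies survive.
print(len(survivors) / N)
```

From the outside, about half the copies are gone; from the inside, every program that can still report anything has only ever seen an even number.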
The same is true on any interpretation of QM, and even without QM.
If you are guaranteed to die when the outcome is 2, then every outcome you experience will be outcome 1. Everyone should agree with that. It has nothing to do with any special feature of quantum mechanics. It doesn't rely on "many worlds" or anything.
Yes, QI is in fact not about anything quantum; it is just about a big world, which is why I prefer to call it "many worlds immortality" or big world immortality. To experience outcome 1, I just need the actual existence of my copies.
What is true on any interpretation is that if one experiences any outcome at all, they will with 100 percent probability experience 1. Only with QI can they be 100 percent certain of actually experiencing it.
Yes, QI says that there will always be copies of me that actually experience outcome 1, and since there is no difference between me and a copy, it will be me.
That's just anthropics: you will not observe the world in which you do not exist.
As I mentioned in another comment, I still don't see how this leads to you existing forever.
You will not, actually, measure outcome 1 with 100% probability, since you may well die before doing so.
Let's assume that 1 million copies of me exist and that they play Russian roulette every second, with two equally likely outcomes. The next second there will be 500,000 copies of me who experienced outcome 1, and so on for the next 20 seconds. So one copy of me will survive all 20 rounds of roulette and will feel immortal.
Many worlds immortality is based on this experiment plus two premises: that there are infinitely many copies of me (or that new copies are created after each round), and that there is no existential difference between the copies. In that case the roulette will always misfire.
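The arithmetic of this roulette can be sketched in a few lines (a hedged illustration; the numbers simply restate the example above):

```python
import random

random.seed(1)

N = 1_000_000   # initial copies of "me"
ROUNDS = 20     # rounds of 50/50 roulette, one per second

# Outside view: the expected number of copies surviving all 20 rounds.
print(N / 2 ** ROUNDS)   # about 0.95, i.e. roughly one copy

# One random trial: the population roughly halves every round.
copies = N
for _ in range(ROUNDS):
    copies = sum(random.random() < 0.5 for _ in range(copies))
print(copies)   # almost always 0, 1 or a few copies
```

Outside observers almost never see a 20-round survivor; but conditional on any copy surviving at all, that copy remembers nothing but misfires.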
I put all the different outcomes of these two premises into the map in the opening post; it seems strange that nobody sees it )) If there is no infinite number of my copies, or if the copies are not equal, big world immortality doesn't work.
QI messes up the subjective probabilities. If there is simply one world and one "copy" of you, and you have a very, very small probability of surviving some event, you can be practically certain that you won't live to eat breakfast the next day. However, if there are very many copies of you and QI works, you can be certain that you will live. It completely changes what you should, subjectively, expect to experience in such a situation.
In no case should you expect to experience not living until the next day. That cannot be experienced, whether QI is true or not.
Correct, but in some cases I could expect to not experience anything.
What exactly do you mean by "you" here? (I think maybe different things in different cases.)
Maybe I should try to get rid of that word. So let's suppose we have a conscious observer in a situation like that, so that they have a very, very large probability to die soon and a small but non-zero probability to survive. Now, if there is only one world that doesn't split and there are no copies of that observer, i.e. other observers who have a conscious experience identical or very similar to that of our original observer, then that observer should expect that i) the only outcome that they may experience is one in which they survive, but that ii) most likely they will not experience any outcome.
Whereas given MWI and QI, there will with certainty be an observer (numerous such observers, actually) who will remember being the original observer and feel like they are the same observer.
So "you" kind of means "someone who feels like he/she/it is you".
But if you hold "you X" to be true merely because someone who feels like they're you does X, without regard for how plentiful those someones are across the multiverse (or perhaps just that part of it that can be considered the future of the-you-I'm-talking-to, or something) then you're going to have trouble preferring a 1% chance of death (or pain or poverty or whatever) to a 99% chance. I think this indicates that that's a bad way to use the language.
I'm not sure I entirely get what you're saying; but basically, yes, I can see trouble there.
But I think that, at its core, the point of QI is just to say that given MWI, conscious observers should expect to subjectively exist forever, and in that it differs from our normal intuition which is that without extra effort like signing up for cryonics, we should be pretty certain that we'll die at some point and no longer exist after that. I'm not sure that all this talk about identity exactly hits the mark, although it's relevant in the sense that I'm hopeful that somebody manages to show me why QI isn't as bad as it seems to be.
QI or no QI, we should believe the following two things.
In every outcome I will ever get to experience, I will still be alive.
In the vast majority of outcomes 200 years from now (assuming no big medical breakthroughs etc.), measured in any terms that aren't defined by my experiences, I will be dead.
What QI mostly seems to add to this is some (questionable) definitions of words like "you", and really not much else.
I agree with qmotus that something is being added, not so much by QI, as by the many worlds interpretation. There is certainly a difference between "there will be only one outcome" and "all possible outcomes will happen."
If we think all possible outcomes will happen, and if you assume that "200 years from now, I will still be alive," is a possible outcome, it follows from your #1 that I will experience being alive 200 years from now. This isn't a question of how we define "I" - it is true on any definition, given that the premises use the same definition. (This is not to deny that I will also be dead -- that follows as well.)
If only one possible outcome will happen, then very likely 200 years from now, I will not experience being alive.
So if QI adds anything to MWI, it would be that "200 years from now, I will still be alive," and the like, are possible outcomes.
There's no observable difference between them. In particular, "happen" here has to include "happen on branches inaccessible to us", which means that a lot of the intuitions we've developed for how we should feel about something "happening" or not "happening" need to be treated with extreme caution.
OK. But the plausibility -- even on MWI -- of (1) "all possible outcomes will happen" plus (2) "it is possible that 200 years from now, I will still be alive" depends on either an unusual meaning for "will happen" or an unusual meaning for "I" (or of course both).
Maybe the right way to put it is this. MWI turns "ordinary" uncertainty (not knowing how the world is or will be) into indexical uncertainty (not knowing where in the world "I" will be). If you accept MWI, then you can take something like "X will happen" to mean "I will be in a branch where X happens" (in which case you're only entitled to say it when X happens on all branches, or at least a good enough approximation to that) or to mean "there will be a branch where X happens" (in which case you shouldn't feel about that in the same way as you feel about things definitely happening in the usual sense).
So: yes, on some branch I will experience being alive 200 years from now; this indeed follows from MWI. But to go from there to saying flatly "I will experience being alive 200 years from now" you need to be using "I will ..." locutions in a very nonstandard manner. If your employer asks "Will you embezzle all our money?" and your intentions are honest, you will probably not answer "yes" even though presumably there's some very low-measure portion of the multiverse where for some reason you set out to do so and succeed.
Whether that nonstandard usage is a matter of redefining "I" (so it applies equally to every possible continuation of present-you, however low its measure) or "will" (so it applies equally to every possible future, however low its measure) is up to you. But as soon as you say "I will experience being alive 200 years from now" you are speaking a different language from the one you speak when you say "I will not embezzle all your money". The latter is still a useful thing to be able to say, and I suggest that it's better not to redefine our language so that "I will" stops being usable to distinguish large-measure futures from tiny-measure futures.
Unless they were already possible outcomes without MWI, they are not possible outcomes with MWI (whether QI or no QI).
What MWI adds is that in a particular sense they are not merely possible outcomes but certain outcomes. But note that the thing that MWI makes (so far as we know) a certain outcome is not what we normally express by "in 200 years I will still be alive".
You raise a valid point, which makes me think that our language may simply be inadequate to describe living in many worlds. Because both "yes" and "no" seem to me to be valid answers to the question "will you embezzle all our money".
I still don't think that it refutes QI, though. Take an observer at some moment: looking towards the future and ignoring the branches where they don't exist, they will see that every branch will lead to them living to be infinitely old; but every branch doesn't lead to them embezzling their employer's money.
Do you mean that it's not certain because of the identity considerations presented, or that MWI doesn't even say that it's necessarily true in some branch?
I would say that QI (actually, MWI) adds a third thing, which is that "I will experience every outcome where I'm alive", but it seems that I'm not able to communicate my points very effectively here.
How does MWI do that? On the face of it, MWI says nothing about experience, so how do you get that third thing from MWI? (I think you'll need to do it by adding questionable word definitions, assumptions about personal identity, etc. But I'm willing to be shown I'm wrong!)
I think this post by entirelyuseless answers your question quite well, so if you're still puzzled by this, we can continue there. Also, I don't see how QI depends on any additional weird assumptions. After all, you're using the word "experience" in your list of two points without defining it exactly. I don't see why it's necessary to define it either: a conscious experience is most likely simply a computational thing with a physical basis, and MWI and these other big world scenarios essentially say that all physical states (that are not prohibited by the laws of physics) happen somewhere.
You use the word "you" to refer not to a single something, but rather to a vast rapidly expanding field of different consciousnesses united only by the fact that long time ago they branched off from a single point -- right?
Yes, I'm assuming a sort of patternist viewpoint here. Although I don't think that it's particularly important, whatever one's preferred theory of identity is, it remains the case that given QI, there will be a "you" (or multiple "you"s) in that scenario who will feel like they are the same consciousness as the "you" at the point of branching.
Well, not quite, in that scenario I will feel that I am one of a multitude of different "I"s spawned from a branching point. Kinda like the relationship between you and your (first-, second-, third-, etc.) cousins.
An important property of self-identity is uniqueness.
Will the person before the branching then simply be another cousin to you? If so, do you feel like the person you woke up as tomorrow morning was not in fact you, but yet another cousin of yours?
It depends on whether I know/believe that I'm the only one who woke up this morning with memories of my yesterday's self, or a whole bunch of people/consciousnesses woke up this morning with memories of my yesterday's self.
The self before the branching would be my ancestor who begat a lot of offspring of which I'm one.
One -> one is a rather different situation from one -> many.
Fair enough, I just find it extremely difficult to think like that in practice (it's a bit easier if I look back at myself from ten years ago, or forward at myself thirty years in the future).
Well, under MWI there are people who "are" you in sense of having been born to the same mother on the same day, but their branch diverged early on so that they are very unlike you now. And still they are also "you".
True, and as I said, I feel like those people are indeed closer to cousins. But when we're talking about life and death situations such as those that QI applies to, the "I's after branching" are experientially so close to me that I do think that it's more about immortality for me than about me just having a bunch of cousins.
I am a bit confused. If we are living in a Quantum Immortality world, why don't we see any 1000-year-old people around?
Only one observer is immortal in any one world; you can't meet the others.
It is like a lottery with one prize: if you win, the others have lost. But the anthropic principle metaphor is more accurate. You don't meet other immortals for the same reason the Fermi paradox works and we don't meet other inhabited planets: winning is so improbable that we can find ourselves on a habitable planet only because of observation selection.
I understand QI as related to the Anthropic Principle. The point is that you will tend to find yourself observing things, which implies that there is an effectively immortal version of you somewhere in probability space. It doesn't require that any Quantum Immortals coexist in the same world.
Of course, we'd be far more likely to continue observing things in a world where immortality is already available than in one where it is not, but since we're not in that world, it doesn't seem too outlandish to give a little weight to the idea that the absence of Quantum Immortals is a precondition to being a Quantum Immortal. I have no idea how that makes sense, though. One could construct fantastic hypotheticals about eventually encountering an alien race intent on wiping out immortals, or some Highlander-esque shenanigans, but more likely is that immortality is just hard and not that many people can win the QI lottery in a single world. (Or even that we happen to be living at the time when immortality is attainable.)
Incidentally (or frustratingly), this gets us back into "it's all part of the divine plan" territory. Why do you go through problem X? Because if you didn't, you would eventually die forever.
I am now curious as to whether or not there are books that combine Quantum Immortality with religious eschatology. Just wait for the Quantum Messiah to invent a world-hopping ability to rescue everyone who has ever lived from their own personal eternity (which is probably a Quantum Hell by that point), and bring them to Quantum Heaven.
(I was not thinking Quantum Jesus would be an AI, but sure; why not? Now we have the Universal Reconciliation version of straw Singularitarianism.)
The Anthropic Principle does not imply immortality. It basically says that you will not observe a world in which you don't exist, but it says nothing about you continuing to exist forever in time.
Because it's incredibly unlikely for anyone to live to be a thousand years old and equally unlikely whether MWI is true or not. There are worlds where we see maybe one such person, of course, but this just isn't one of them (unless you think that, say, Stephen Hawking keeping on living against all odds is evidence of QI).
Under QI, doesn't everyone live to be a thousand years old and more?
Human longevity looks to have a pretty hard cut-off at the moment. We don't see anyone 150 years old, either.
Think of it like this: MWI makes the exact same predictions regarding observations as the Copenhagen interpretation, it's just that observations that are incredibly unlikely to ever happen in CI happen in a very small portion of all existing worlds in MWI. QI does not change this, which means that everybody does live to 1000 in a small minority of worlds, but in most worlds they die in their 120s at the latest. Therefore you're very unlikely to see anyone else besides yourself living miraculously long.
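This point can be made concrete with a toy calculation (all the numbers here are invented for illustration):

```python
# Toy model: past age 120, assume a 10% chance of surviving each further
# year by sheer luck (a made-up number for illustration).
P_YEARLY = 0.1
EXTRA_YEARS = 50          # years past 120 needed to look "miraculous"
PEOPLE = 1_000_000        # people you might observe in one world

p_miracle = P_YEARLY ** EXTRA_YEARS
# Expected number of miraculously old people visible in any one world:
print(PEOPLE * p_miracle)   # ~1e-44, effectively zero
```

These odds are the same under MWI and CI; MWI only adds that this tiny number is the measure of actually existing branches rather than a mere probability.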
I don't believe the Copenhagen interpretation expects me to live forever.
Out of curiosity, have there been attempts to estimate the "branching speed" under MWI? How many worlds with slightly different copies of me will exist in 1 second?
It does not. There's the difference. But if someone looks at you from the outside, the probability with which they will see you living or dying is not affected by quantum interpretations.
As to your second question, I don't know. QI as presented here is based on a pretty simplistic version of MWI, I suppose, one which may have flaws. I hope that's the case, actually.
The problem with Quantum Immortality is that it is a pretty horrible scenario. That's not an argument against it being true of course, but it's an argument for hoping it's not true.
Let's assume QI is true. If I walk under a bus tomorrow, I won't experience universes where I die, so I'll only experience miraculously surviving the accident. That sounds good.
But here's where the nightmare starts. Dying is not a binary process. There'll be many more universes where I survive with serious injuries than universes where I survive without injury. Eventually I'll grow old. There'll be some universes where by random quantum fluctuations that miraculously never happens, but in the overwhelming majority of them I'll grow old and weak. And then I won't die. In fact I wouldn't even be able to die if I wanted to. I could decide to commit suicide, but I'll only ever experience those universes where for some reason I chose not to go through with it (or something prevented me from going through with it).
It's the ultimate horror scenario. Forced immortality, but without youth or health.
If QI is true having kids would be the ultimate crime. If QI is true the only ethical course of action would be to pour all humanity's resources into developing an ASI and program it to sterilize the universe. That won't end the nightmare, there'll always be universes where we fail to build such an ASI, but at least it will reduce the measure of suffering.
We believe we may have found a solution to degenerate QI, via simulationism under acausal trade. The basic gist of it is that continuations of the mind-pattern after naturally lethal events could be overwhelmingly more frequent under benevolent simulationism than continuations arising from improbable quantum-physical outcomes, and, if many of the agents in the multiverse operate under an introspective decision theory, pact-simulist continuations already are overwhelmingly more frequent.
In its natural form QI is bad, but if we add cryonics, the two help each other.
If you walk under the bus, you now have three outcomes: you die; you are cryopreserved and later resurrected; or you are badly injured for eternity. QI eliminates the first one.
So you will either be cryopreserved, or be badly injured and survive for eternity. While both outcomes have very small probability, cryopreservation may outweigh long-term injury. And it certainly outweighs the chance that you will live to be 120 years old.
So if you do not want to suffer for eternity, you need to sign up for cryonics ))))
Going deeper, we may be surprised to find ourselves in a world that protects us from the very improbable life of a never-dying old man, because we live in the very short period of human history in which cryonics is known.
It may be explained (speculatively) as follows: if you are randomly chosen from all possible immortals, you will find yourself in the class with the highest measure.
This means that you should expect not degradation but ascension, perhaps by merging with a strong AI. It may sound wild, but I was surprised to find I was not the only one who came to this conclusion: when I was at MIRI last fall, one guy there had the same ideas (I forget his name).
In short, it may be put this way: of all the humans who will be immortal, the biggest share will be those who merge with AI, and the smallest will be those who survive as very old men thanks to random fluctuations.
Sure, cryonics would help. But it wouldn't be more than a drop in the ocean. If QI is true, and cryonics is theoretically possible, then 500 years from now there'll be 3 kinds of universes: 1) Universes where I'm dead, either because cryonics didn't pan out (perhaps society collapsed), or because for some reason I wasn't revived. 2) Universes where I'm alive thanks to cryonics and 3) Universes where I'm alive due to quantum fluctuations 'miraculously' keeping me alive.
Clearly the measure of the 3rd kind of universe will be very very small compared to the other two. And since I don't experience the first, that means that subjectively I'm overwhelmingly likely to experience being alive thanks to cryonics. And in most of those universes I'm probably healthy and happy. So that sounds good.
But quantum immortality implies forced immortality forever. No way to escape, no way to permanently shut yourself down once you get bored with life. No way to die even after the heat death of the universe.
No matter how good the few trillion years before that will be, the end result will be floating around as an isolated mind in an empty universe, kept alive by random quantum fluctuations in an increasingly small measure of all universes that will nevertheless always have subjective measure of 1, for literally ever.
Now personally I don't think QI is very likely. In fact I consider it extremely unlikely. All I'm saying is that if it were true, that'd be a nightmare.
Why do you think that it's unlikely?
Update: there are many ways we could survive the end of the universe (see my map), so endless emptiness is not the only option. http://lesswrong.com/lw/mfa/a_roadmap_how_to_survive_the_end_of_the_universe/
While I understand your concerns, I think that over the next trillion years you will be able to find ways to solve the problem, and I am even able to suggest some of the solutions now.
A trillion years from now you will be a very powerful AI which also knows for sure that QI works.
The simplest solution is circular time. In it you are immortal, but your experiences repeat. If they are pleasant, there is no suffering. More complex forms of time are also possible, so the "linear time trap" is just a result of our lack of imagination. Circular time probably results naturally from QI, because any human has a non-zero probability of transforming into any other human, so you will cycle in random patterns through the space of all possible minds. (This also solves the identity problem, by the way: everybody is identical, just at different points of the transformation.)
You could edit your brain so that it would enjoy an empty eternity, and so not suffer. In any case you may lose part of your long-term memory, so you may not know your real age. And in most QI branches this will happen naturally.
Even if some suffering (not very strong or painful) is real a trillion years from now, it may be a good deal to accept QI now, because of the discounting effect. I would prefer to live a trillion years rather than die in strong suffering in the next 20.
Maybe the strong AI will prove that it can create fun more quickly than it consumes it, so it will always have something to do, no matter how much linear time has passed. It may also create many levels of avatar worlds (simulations) whose avatars do not remember their real age (and we are probably inside such a simulation).
It took me 25 years to arrive at these ideas (I got the idea of QI in the summer of 1990), so over the next trillion years I hope to come up with better ones.
So if in the next few months a planet-sized rock comes out of deep space at high velocity and slams into Earth, in which Everett branch will you survive? Which quantum fluctuation will save you?
Yes, in all branches where i am in simulation and wake up. The same "me" may be in different worlds.
Or in the universe where aliens will save me just a second before the impact.
Or I will be resurrected by other aliens based on my footprint in radio waves.
There is no possible issue that cannot be resolved by an answer "you are in a simulation and the simulation just changed its rules".
Given a big world, we live in a simulation and we don't; we're simply unable to self-locate ourselves in the set of all identical copies. That's one of the main points of the post about modal realism that turchin linked to in the original post. Failure to see how this leads to survival in every scenario is due to not thinking enough about it.
A big world was presented here as one of the premises of the whole argument, so if you think that the conclusions drawn here are ridiculous, you should probably attack that premise. I actually think physicists and philosophers would be rather more reluctant to bite all the bullets shot at them, and would think of alternatives, if they realized what implications theories like MWI and inflation have; they would also care more about valid criticisms, such as the fact that we have no accepted solution to the measure problem (although it seems that most physicists think it can be solved without giving up the multiverse).
The one where events happened exactly the same - and then you wake up.
Uncertainty doesn't happen in the universe, after all. The universe isn't uncertain about what it is; the observer is uncertain about what universe it is in.
That's a bit too deep for me.
This is something I've thought about too, although I've been a bit reluctant to write about it publicly. But on the other hand QI seems quite likely to be true, so I guess we should make up our minds about it.
I've contemplated writing a post about the same subject of "big world immortality" (could we call it BWI for short?) myself, but mostly focusing on this part: "There is nothing good in it also, because most of my survived branches will be very old and ill. But we could use QI to work for us, if we combine it with cryonics. Just sign up for it or have an idea to sign up, and most likely you will find your self in survived branch where you will be resurrected after cryostasis. (The same is true for digital immortality - record more about your self and future FAI will resurrect you, and QI rises chances of it.)"
It seems to me that we should be very pessimistic about the future because of QI/BWI. After all, what guarantee is there that you will wake up in a friendly world, or that the AI who resurrects you is friendly? Should we be worried about this? What could we do to increase the likelihood that we'll find ourselves in a comfortable future?
I'm very confused about this myself. It seems to me, too, that there's a significant chance that QI is true, but there are objections, of course: the inventor of the mathematical universe hypothesis, Max Tegmark, disputes it himself in his 2014 book, arguing that "infinitely big" and "infinitely small" don't actually exist and QI will therefore not work. I have no idea if this makes sense or not. There are also attempts to rid physics of somewhat related ideas such as Boltzmann brains.
It's even more confusing since I'm not really interested in immortality myself. Normally I would be mildly enthusiastic about "ordinary" ways of life extension, but avoid things such as cryonics. With QI, I don't know. Now that this post is here, I hope people will share their thoughts.
If I am resurrected, I expect that the AI that does it will be friendly with 90 per cent probability. Why would a UFAI be interested in resurrecting me? Just to punish me? Or to test its ideas about the end of the world in a simulation? In that case it would simulate me from my birth.
Anyway, signing up for cryonics is the best way to escape the eternal suffering of bad quantum immortality in a very old body.
I don't understand Tegmark's objection. We don't need an infinite world for BWI, just a very big one, big enough to contain many of my copies.
BWI would help me survive even if I am a Boltzmann brain now. I would die the next moment, but in another world, where I am part of a real world, I would continue to exist, so the same logic as in BWI applies.
I still think that BWI is too speculative to be used in actual decision making. I also think that one's enthusiasm about death prevention may depend on the urgency of the situation: if there is a fire in a house, everybody in it will be very eager to save their lives.
Maybe; there's a certain scenario, for instance, that for a time wasn't allowed to be mentioned on LW (not anymore, I suppose). In any case, the ratio of UFAIs to FAIs is also important; even if few UFAIs care about resurrecting you, they can be much more numerous than FAIs.
This is actually what I would suppose to be most common. In which case we're back to the enormously prolonged old age scenario, I suppose.
Basically, I think you're right. Either Tegmark hasn't thought about this enough, or he believes that it would shrink the size of our big world enormously. Kudos to him for devoting a chapter of a popular science book to the subject, though.
Why do you think that it's so speculative? MWI has a lot of support on LW and among people working on quantum foundations; cosmic inflation has basically universal acceptance among physicists (and alternatives, such as Steinhardt's ekpyrotic cosmology, have basically the same implications in this regard); string theory is very plausible; Tegmark's mathematical universe is what I would call speculative, but even it makes a lot of sense; and patternism, the other necessary ingredient, is again almost universally accepted on LW.
Probably. But as humans we're basically built to strive to survive in a situation like that, meaning that our judgment is likely pretty severely impaired.
Now we can speak about RB freely. I mostly think that a mild version is true, that is, good people will be rewarded more, but without punishment or suffering. I know some people who independently came to the idea that a future AI will reward them. As for me, I am not afraid of any version of RB, as I have done a lot to promote the ideas of AI safety.
I still don't get Tegmark's idea; maybe I need to go back to his book.
For example, we could live in a simulation with an afterlife, where suicide is punished. If we strongly believed in BWI, we could build a universal desire-fulfilment machine: just connect any desired outcome to a bomb, so that it explodes if our goal is not reached. But I am sceptical about all beliefs in general, which is probably also a shared attitude on LW )) I will not risk permanent injury or death if I have a chance to survive without it. But I could imagine a situation where I would change my mind, if the real danger outweighed my uncertainty about BWI.
For example, if one has cancer, he may prefer an operation with a 20 per cent chance of a positive outcome over chemo with a 40 per cent chance of a positive outcome but a slow and painful decline in case of failure. In that case BWI gives him a large chance to become completely illness-free.
This thread is not about values, but I think that values exist only inside human beings. An abstract rational agent may have no values at all, because it may prove that any value is just a logical mistake.