I read the first link, and to me it seems that the author actually stumbles upon the right answer in the middle of the paper, only to dismiss it immediately with "we have no good way to justify it" and proceed towards things that make less sense. I am talking about what he calls the "intensity rule" in the paper.
Assuming a non-collapse interpretation, the entire idea is that literally everything happens all the time, because every particle has a non-zero amplitude at every place, but it all adds up to normality anyway, because what matters is the actual value of the amplitude, not merely whether it is zero or non-zero. (Theoretically, epsilon is not zero. Practically, the difference between zero and epsilon is epsilon.) Outcomes with larger amplitudes are the normal ones, the ones we should expect more. Outcomes with epsilon amplitudes are the ones we should only pay epsilon attention to.
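For concreteness, the weighting being gestured at here is presumably just the standard Born rule: an outcome with amplitude \alpha gets probability weight |\alpha|^2 after normalization, so

\[
P(i) = \frac{|\alpha_i|^2}{\sum_j |\alpha_j|^2},
\qquad
\frac{P(\text{epsilon-amplitude outcome})}{P(\text{normal outcome})}
= \frac{|\epsilon|^2}{|\alpha_{\text{normal}}|^2} \ll 1 .
\]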
Is it possible that the furniture in my room will, due to some very unlikely synchronized quantum tunneling, transform into a hungry tiger? Yes, it is theoretically possible. (Both in the Copenhagen and many-worlds interpretations, by the way.) How much time should I spend contemplating such a possibility? Just by mentioning it, I have already spent many orders of magnitude more than would be appropriate.
The paper makes an implicit assumption about time, which I am going to ignore for the moment. Let's assume that, because of quantum immortality, you will be alive 1,000,000 years from now. Which path is most likely to get you from "here" to "there"?
In any case, some kind of miracle is going to happen. But we should still expect the smallest necessary miracle. In absolute numbers, the chances of "one miracle" and "dozen miracles" are both pretty close to zero, but if we are going to assume that some miracle happened, and normalize the probabilities accordingly, "one miracle" is almost certainly what happened, and the probability of "dozen miracles" remains pretty close to zero even after the normalization. (Assuming the miracles are of comparable size, mutually independent, et cetera.)
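A toy version of that normalization, with a made-up number purely for illustration: say a single miracle has probability p = 10^{-20}, and a dozen independent, comparable miracles have probability p^{12}. Conditioning on "some miracle happened",

\[
P(\text{one miracle} \mid \text{some miracle}) = \frac{p}{p + p^{12}} \approx 1,
\qquad
P(\text{dozen miracles} \mid \text{some miracle}) = \frac{p^{12}}{p + p^{12}} \approx p^{11} = 10^{-220},
\]

so the dozen-miracle path stays negligible even after renormalization.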
Comparing likelihoods of different miracles is, by definition, outside of our usual experience, so I may be wrong here. But it seems to me that the horror scenario envisioned by the author requires too many miracles. (In other words, it seems optimized for shock value, not relative probability.) Suppose that in 10 years you get hit by a train, and by a miracle, a horribly disfigured fragment of you survives in an agony beyond imagination. Okay, technically possible. So, what is going to happen during the following 999,990 years? It seems that further surviving in this state would require more miracles than further surviving as a healthy person. (The closer to death you are, the more unlikely it is for you to survive another day, or year.) And both these paths seem to require more miracles than being frozen now, and later resurrected and made forever young using advanced futuristic technology. Even just dying now, and being resurrected 1,000,000 years later, would require only one miracle, albeit a large one. If you are going to be alive in 1,000,000 years, you are most likely to get there by the least miraculous path available. I am not sure what exactly that path is, but being constantly on the verge of death and surviving anyway seems too unlikely (and being frozen and later unfrozen, or uploaded to a computer, seems almost ordinary in comparison).
Now, let's take a bit more timeless perspective here. Let's look at the universe in its entirety. According to quantum immortality, there are you-moments in the arbitrarily distant future. Yes; but most of them are extremely thin. Most of the mass of the you-moments is here, plus or minus a few decades. (Unless there is a lawful process, such as cryonics, that would stretch a part of the mass into the future enough to change the distribution significantly. Still not as far as quantum immortality, which can probably overcome even the heat death of the universe and get so far that time itself stops making sense.) So, according to the anthropic principle, whenever you find yourself existing, you most likely find yourself in the now -- I mean, in your ordinary human lifespan. (Which is, coincidentally, where you happen to find yourself right now, isn't it?) There are a few you-moments in very exotic places, but most of them are here. Most of your life happens before your death; most instances of you experiencing yourself are the boring human experience.
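As a toy illustration of that concentration of measure (the exponential form is my assumption, not something derived from the physics): if the surviving measure beyond a normal lifespan falls off roughly like e^{-\lambda t}, then the fraction of all you-moments lying more than T years out is

\[
\frac{\int_T^{\infty} e^{-\lambda t}\,dt}{\int_0^{\infty} e^{-\lambda t}\,dt} = e^{-\lambda T},
\]

which never reaches zero but becomes astronomically small long before the "arbitrarily distant future" kicks in.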
Now, let’s take a bit more timeless perspective here. Let’s look at the universe in its entirety. According to quantum immortality, there are you-moments in the arbitrarily distant future. Yes; but most of them are extremely thin. Most of the mass of the you-moments is here, plus or minus a few decades.
Why does that matter?
Under single universe assumptions, there is no quantum immortality or torment, because low probability things generally don't happen.
Under the single-mind multi-universe view -- where there is one "real" you that switches tracks in pr...
...[Note: potential info hazard, but probably good to read if you already read the question.]
[Epistemic status: this stuff is all super speculative due to the nature of the scenarios involved. Based on my understanding of physics, neuroscience, and consciousness, I haven't seen anything that would rule this possibility out.]
All I want to know is, is this stuff just being pulled out of his butt? Like, an extremely unlikely hypothetical that nonetheless carries huge negative utility? I'd be okay with that, as I'm not a utilitarian. Or have these scenarios actually been considered plausible by AI theorists?
FWIW, I've thought about this a lot and independently came up with and considered all the scenarios mentioned in the Turchin excerpt. It used to really really freak me out, and I believed it on a gut level. Avoiding this kind of outcome was my main motivation for actually getting the insurance for cryonics (the part I was previously cryocrastinating on). However, I now believe that QI is not an s-Risk and don't feel personally worried about the possibility anymore.
One thing to note is that this is a potential problem in any sufficiently large universe, and doesn't depend on a many-worlds style interpretation being correct. Tegmark has a list of various multiverses, which differ in ways that affect what scenarios we might face. I do believe in many-worlds (as a broad category of interpretations) though.
Lots of the comments here seem confused about how this works, so I'll recap. If I'm at the point of death where I'm still conscious, the next moment I'll experience will be (in expectation) whatever conscious state has the highest probability mass in the multiverse among states that are valid next conscious moments of the previous moment. Note that this next conscious moment is not necessarily in the future of the previous moment. If the multiverse contains no such moments, then we would just die the normal way. If the multiverse includes lots of humans doing ancestor simulations, you could potentially end up in one of those, etc. The key is that out of all conscious beings in the multiverse who feel like this just happened to them, those are (tautologically) the ones having the subjective experience of the next valid conscious moment. And it's valid to care about these potential beings; AFAICT this is the same reason I care about my future selves (who do not exist yet) in the normal sense.
Regarding cryonics, it seems like the best way to preserve a significant amount of information about my last conscious moment. To whatever extent information about this is lost, a civilization that cares about this could optimize for likelihood of being a valid next conscious moment. I think this is the main actionable thing you can do for this. Of course, this only passes the buck to the future, since there is still the inevitable heat death of the universe to contend with.
Another thing that seems especially plausible for sudden deaths is Aranyosi's first scenario. In this case, the highest-probability-mass next conscious moment will be a moment based on the moment from a few seconds before, but with a "false" memory of having survived a sudden death. This has relatively high probability because people sometimes report having this kind of experience when they have a close call. But this again simply passes the buck to the future, where you're most likely to die from a gradual decline.
However, I think that by far the most likely situation is the one common to death by aging, illness, or the heat death of the universe. At the last moment of consciousness, the only next conscious moments left will be in highly improbable worlds. But which world you are most likely to "wake up" in is still determined by Occam's razor. People seem to imagine that these improbable worlds will be ones where your consciousness remains in a similar state to the one you died in, but I think this is wrong.
Think carefully about what things are actually happening to support a conscious experience. Some minimal set of neurons would need to be kept functional -- but beyond that, we should expect entropy to affect everything that is not causally upstream of the functionality of this set of neurons. Since strokes happen often, and don't always cause loss of consciousness, we can expect them to eventually occur in every non-essential (for consciousness) region of the brain. Because people can experience nerve damage to their sensory neurons without losing consciousness, we can expect that the ability to experience physical pain will decay. Emotional pain doesn't seem to be that qualitatively different from physical pain (e.g. it is also mitigated by NSAIDs), so I expect this will be true for pain in general.
So most of your body and most of your mind will still decay as normal; only the absolutely essential neuronal circuitry (and whatever else, perhaps blood circulation) needed to induce a valid next conscious moment will miraculously survive. Anesthesia works by globally reducing synapse activity, so the initial stages of this would likely feel like going under anesthesia, but where you never quite go out. Because anesthetics stop pain (remember this is still true if applied locally), and because by default we do not experience pain, I'm now pretty sure that, even given QI being real, infinite agony is very unlikely.
I wrote the article quoted above. I think I understand your feelings: when I came to the idea of QI, I realised, after a first period of excitement, that it implies the possibility of eternal suffering. However, in the current situation of rapid technological progress, such eternal suffering is unlikely, as within 100 years some life-extending and pain-reducing technologies will appear. Or, if our civilization crashes, some aliens (or the owners of the simulation) will eventually bring pain-reduction techniques.
If you have thoughts about non-existence, it may be some form of suicidal ideation, which could be a side effect of antidepressants or of bad circumstances. I had it, and I am happy that it is in the past. If such ideation persists, seek professional help.
While death is impossible in the QI setup, a partial death is still possible, when a person forgets those parts of himself or herself which want to die. Partial death has already happened many times to the average adult person, who has forgotten her childhood personality.
However, in the current situation of rapid technological progress, such eternal suffering is unlikely, as within 100 years some life-extending and pain-reducing technologies will appear. Or, if our civilization crashes, some aliens (or the owners of the simulation) will eventually bring pain-reduction techniques.
What if I don't agree?
If you have thoughts about non-existence, it may be some form of suicidal ideation, which could be a side effect of antidepressants or of bad circumstances. I had it, and I am happy that it is in the past. If such ideation persists,...
If quantum torment is real, attempting suicide would only get you there faster. It would restrict your successor observer-moments to only those without enough control to choose to die (since those who succeed in the attempt have no successors). Locked-in syndrome and the like.
Signing up for cryonics, on the other hand, would probably be a good idea, since it would increase the diversity of possible future observer moments to include cases where you get revived.
Enlightenment (in the Buddhist sense) might possibly be an escape. Some rationalists seem to take the possibility seriously and say you don't have to believe in anything supernatural. Meditation is just happening in your brain. If you do reach Nirvana, perhaps you can decide not to suffer at all, even if you do get locked-in (or worse). This kind of sounds like wireheading to me, but if the alternative is Literally Hell, then maybe you should take the deal. (Epistemic status: I'm not enlightened or anything. I've just heard people talk about it.)
As someone who has had stream-entry, and the change-in-perception called Enlightenment... I endorse your read of it as being potentially useful in this case?
I'm going to give more details in a sub-comment, to give people who are already rolling their eyes a chance to skip over this.
Actually, I just realized there's no reason you would remain conscious in QI. Surely the damage to your brain and body would put you into a coma - a fate I'd like to avoid, but definitely better than Literally Hell.
Also, what is all this talk about suicide? All I said was that I plan to die normally. You guys are reading weird things into that...
The survivors are living in an area that shatters the illusion of classical reality. The survivor probabilities still favour classical probabilities, so you should expect things to get classical. In a non-classical universe it might be possible to get a very rapid regeneration that might bounce you far from the torment zone for a long time. Even if you do not get a particularly stellar regeneration, you will constantly be tunneling out of the torment zone too. At some point the tunneling to torment and the tunneling to relief should balance out, so that you have a 50% chance of being in a bad scenario and a 50% chance of being in a good scenario. That is, the longer you are sustained in a scenario that classically would be considered bad, the less faith you can have that the mechanics of the scenario will continue to work. It will either resolve to a classical situation different from the current one, or it is such a jumbled mess that "being stuck in a bad place" is not representative.
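For what it's worth, the 50/50 figure is what a symmetric two-state toy model gives (the symmetry of the rates is an assumption of the comment above, not something established): with per-step tunneling probabilities a (torment to relief) and b (relief to torment), the stationary distribution is

\[
\pi_{\text{bad}} = \frac{b}{a+b}, \qquad \pi_{\text{good}} = \frac{a}{a+b},
\]

which is 50/50 exactly when a = b, and otherwise tilts toward whichever state is harder to leave.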
In general the jumbling might also target your personality, and then the question of how much of your alteration really counts as you starts to get relevant. If you escape with a cunning deduction you made because you thought you were Sherlock Holmes (because cosmic rays fabricated your memories of being him), does that count as Sherlock Holmes waking up, or you, or both? One might need classical mechanics to maintain a sense of identity stability (that is, the you of the next second has a very similar personality), and when that is taken away it is not clear the concept applies with the same strength. Sure, somebody conscious will get to experience a bunch of stuff, and it will be structurally reminiscent of you. But will it really be you?
I used to be heavily into this area, and after succumbing somewhat to an 'it all adds up to normality' shoulder-shrugging, my feeling on this is that it's not just the 'environment' that is subject to radical changes, but the mind itself. It might be that there's a kind of mind-state attractor, by which minds tend to move along predictable paths and converge upon weirdness together. All of consciousness may, by different ways of looking at it, be considered as fragments of that endstate.
Imagine a benevolent AI on a universal scale that simulates the greatest achievable number of copies of one specific "life". Namely, imagine that it would simulate continuous states from the emergence of consciousness to some form of nirvana. If we assume that during brain death experience is getting simpler, eventually reaching the simplest observer-moment (which would be identical for all dying minds), we can ask ourselves what the next observer-moment should then be; and if we already have the simplest one, the next should be more complex, maybe if the complexity wo...
You've been basilisked. There is no empirical evidence for MWI, but a number of physicists do believe that it can be something related to reality, with some heavy modifications, since, as stated, it contradicts General Relativity. Sean Carroll, an expert in both Quantum Mechanics and General Relativity, is one of them. Consider reading his blog. His latest article, about the current (pitiful) state of fundamental research in Quantum Mechanics, can be found in the New York Times. His book on the topic is coming out in a couple of days, and it is guaranteed to be a highly entertaining and insightful read, which might also alleviate some of your worries.
You've been basilisked.
Yes, but how plausible are such scenarios considered? If I die naturally? I don't find AI superintelligence very plausible.
What about that talk of being 'locked in a very unlikely but stable world'? Where is he getting that from?
There is no empirical evidence for MWI, but a number of physicists do believe that it can be something related to reality, with some heavy modifications, since, as stated, it contradicts General Relativity. Sean Carroll, an expert in both Quantum Mechanics and General Relativity, is one...
Some thought experiments that can contradict QI:
1b) Quantum temporary-death. Coming out of temporary unconscious states like temporary death (where your heart can stop beating, and you remain unconscious, for up to several hours) would be impossible, since you would instead branch into states of consciousness before any unconsciousness could settle in.
Entropy. Dodging death infinitely is impossible. There might be branches where you die and others where you survive, but even then, in the ones where you survive, your body is still decaying. To keep decaying forever and never die would simply contradict biology. After a certain threshold of damage, death is inevitable. But would this prolong itself for thousands of years of agony? No. It would occur on normal timelines. Maybe, say, if you get shot in the head, you die instantly in branch A and survive damaged in branch B, but that doesn't mean you're not on your way to death in branch B if the damage is severe enough. If it isn't, you go to the hospital and make a recovery. In short: to keep suffering for eternity simply contradicts biology - no one can live forever, and damage will always lead to death (sooner or later, BUT on a normal biological timescale related to the different degrees of damage in each branch).
Maybe MWI is just bs, lol.
I think, in short, QI is Zeno's paradox. The ancient Greek philosopher Zeno concluded that for each distance between A and B, you first have to reach the halfway point between A and B before getting to B. Therefore, you can never reach B, since you always have to reach the halfway point first, and there is always a halfway point, no matter how small the distance. This led Zeno to conclude that movement is impossible. In reality, we know that it isn't: you will eventually reach B, and on a normal timescale.
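For reference, the standard resolution of Zeno: the halfway steps form a geometric series whose sum is finite, so infinitely many steps take only a finite time,

\[
\sum_{n=1}^{\infty} \frac{1}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1,
\]

which is the sense in which the analogy says you still reach B (and still die) on a normal timescale.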
Even if MWI is true, you eventually will die (in normal timescales), just as you will eventually reach point B (in normal timescales).
I've recently been obsessing over the risks of quantum torment, and in the course of my research I downloaded this article: https://philpapers.org/rec/TURFAA-3
Here's a quote:
"4.3 Long-term inescapable suffering is possible
If death is impossible, someone could be locked into a very bad situation where she can’t die, but also can’t become healthy again. It is unlikely that such an improbable state of mind will exist for too long a period, like millennia, as when the probability of survival becomes very small, strange survival scenarios will dominate (called “low measure marginalization” by Almond (2010)). One such scenario might be aliens arriving with a cure for the illness, but more likely, the suffering person will find herself in a simulation or resurrected by superintelligence in our world, perhaps following the use of cryonics.
Aranyosi summarized the problem: “David Lewis’s point that there is a terrifying corollary to the argument, namely, that we should expect to live forever in a crippled, more and more damaged state, that barely sustains life. This is the prospect of eternal quantum torment” (Aranyosi 2012; Lewis 2004). The idea of outcomes infinitely worse than death for the whole of humanity was explored by Daniel (2017), who called them “s-risks”. If MI is true and there is no high-tech escape on the horizon, everyone will experience his own personal hell.
Aranyosi suggested a comforting corollary (Aranyosi 2012), based on the idea that multiverse immortality requires not remaining in the “alive state”, but remaining in the conscious state, and thus damage to the brain should not be very high. It means, according to Aranyosi, that being in the nearest vicinity of death is less probable than being in just “the vicinity of the vicinity”: the difference is akin to the difference between constant agony and short-term health improvement. However, it is well known that chronic states of ill health which don’t affect consciousness are possible, e.g. cancer, whole-body paralysis, depression, and locked-in syndrome. However, these bad outcomes become less probable for people living in the 21st century, as developments in medical technology increase the number of possible futures in which any disease can be cured, or where a person will be put in cryostasis, or wake up in the next level of a nested simulation. Aranyosi suggested several other reasons why eternal suffering is less probable:
1) Early escape from a bad situation: “According to my line of thought, you should rather expect to always luckily avoid life-threatening events in infinitely many such crossing attempts, by not being hit (too hard) by a car to begin with. That is so because according to my argument the branching of the world, relevant from the subjective perspective, takes place earlier than it does according to Lewis. According to him, it takes place just before the moment of death, according to my reasoning it takes place just before the moment of losing consciousness”
(Aranyosi 2012, p.255).
2) Limits of suffering. “The more damage your brain suffers, the less you are able to suffer”
(Aranyosi 2012, p.257).
3) Inability to remember suffering. “Emergence from coma or the vegetative state followed by amnesia is not an eternal life of suffering, but rather one extremely brief moment of possibly painful self-awareness – call it the ‘Momentary Life’ scenario.” (Aranyosi 2012, p.257).
4.4 Bad infinities and bad circles
Multiverse immortality may cause one to be locked into a very stable but improbable world – much like the scenario in the episode “White Christmas” of the TV series “Black Mirror” (Watkins 2014), in which a character is locked into a simulation of a room for a subjective 30 million years. Another bad option is a circular chain of observer-moments. Multiverse immortality does not require that the “next” moment will be in the actual future, especially in the timeless universe, where all moments are equally actual. Thus a “Groundhog Day” scenario becomes possible. The circle could be very short, like several seconds, in which a dying consciousness repeatedly returns to the same state as several seconds ago, and as it doesn’t have any future moments it resets to the last similar moment. Surely, this could happen only in a very narrow state of consciousness, where the internal clock and memory are damaged."
Look, I'm not at all knowledgeable in these matters (besides having read Permutation City and The Finale of the Ultimate Meta Mega Crossover). Based on what I've read online on the possibility of quantum immortality, I don't think it is probable, and quantum torment less so. But there's something about a published article giving serious consideration to us suffering eternally or going through 'The Jaunt' from that Stephen King story that is creating a nice little panic attack (in addition to the already scary David Lewis article).
I plan to die and have no intention of signing up for cryonics. (EDIT: This meant die naturally. I have no desire to expedite the process, it's just that I'm not on board with the techno-immortalism popular around here.) All I want to know is, is this stuff just being pulled out of his butt? Like, an extremely unlikely hypothetical that nonetheless carries huge negative utility? I'd be okay with that, as I'm not a utilitarian. Or have these scenarios actually been considered plausible by AI theorists?
I'm also desperate to get in contact with someone who's studied quantum mechanics and can answer questions of this nature. An actual physicist (especially a believer in MWI) would be great. I'd think an understanding of neuroscience would also be very important for analyzing the risks, but how many people have studied both fields? With some exceptions, the only ones I do see discussing it are philosophers.
I'm in a bad place right now; any help would go a long way.