Comment author: gilch 22 May 2017 10:29:30PM 0 points [-]

I'm not sure what you're implying. Most people close to me are not even aware that I advocate cryonics. I expect this will change once I get my finances sorted out enough to actually sign up for cryonics myself, but for most people, cryonics alone already flunks the Absurdity heuristic. Likewise with many of the perfectly rational ideas here on LW, including the logical implications of quantum mechanics and cosmology, like Subjective Immortality. Linking more "absurdities" seems unlikely to help my case in most instances. One step at a time.

Comment author: qmotus 25 May 2017 02:42:28PM 0 points [-]

Actually, I'm just interested. I've been wondering if big world immortality is a subject that would make people a) think that the speaker is nuts, b) freak out and possibly go nuts or c) go nuts because they think the speaker is crazy; and whether or not it's a bad idea to bring it up.

Comment author: gilch 20 May 2017 12:33:24AM 2 points [-]

I think it does imply subjective immortality. I'll bite that bullet. Therefore, you should sign up for cryonics.

Consciousness isn't continuous. There can be interruptions, like falling asleep or undergoing anesthesia. A successor mind/pattern is a conscious pattern that remembers being you. In the multiverse, any given mind has many, many successors. A successor doesn't have to follow immediately, or even follow at all in a temporal sense. At the separations implied even for a Tegmark Level I multiverse, past and future are meaningless distinctions, since there can be no interactions.

You are your mind/pattern, not your body. A mind/pattern is independent of substrate. Your unconscious, sleeping self is not your successor mind/pattern. It's an unconscious object that has a high probability of creating your successor (i.e. it can wake up). Same with your cryonically preserved corpsicle, though the probability is lower.

Any near-death event will cause grievous suffering to any barely-surviving successors, and grief and loss to friends and relatives in branches where you (objectively) don't survive. I don't want to suffer grievous injury, because that would hurt. I also don't want my friends and relatives to suffer my loss. Thus, I'm reluctant to risk anything that may cause objective death.

But, the universe being a dangerous place, I can't make that risk zero. By signing up for cryonics, I can increase the measure of successors that have a good life, even after barely surviving.

In the Multiverse, death isn't all-or-none, black or white. A successor is a mind that remembers being you. It does not have to remember everything. If you take a drug that causes you to not form long-term memory of any event today, have you died by the next day? Objectively, no. Your friends and relatives can still talk to "you" the next day. Subjectively, partially. Your successors lack certain memories. But people forget things all the time.

Being mortal in the multiverse, you can expect that your measure of successors will continue to diminish as your branches die, but the measure never reaches absolute zero. Eventually all that remains are Boltzmann brains and the like. The most probable Boltzmann-brain successors only live long enough to have a single conscious quale of remembering being you: the briefest of conscious thoughts. Their successors remember that thought and may have another random thought. You can eventually expect an eternity of totally random qualia and no control at all over your experience.

This isn't Hell, but Limbo. Suffering is probably only a small corner of possible qualia-space, but so is eudaimonia. After an eternity you might stumble onto a small Boltzmann world where you have some measure of control over your utility for some brief time, but that world will die, and your successors will again be only Boltzmann brains.

I can't help that some of my successors from any given moment are Boltzmann brains. But I don't want my only successors to be Boltzmann brains, because they don't increase my utility. Therefore, cryonics.

See the Measure Problem of cosmology. I'm not certain of my answer, and I'd prefer not to bet my life on it, but it seems more likely than not. I do not believe that Boltzmann brains can be eliminated from cosmology, only that they have lesser measure than evolved beings like us. This is because of the Trivial Theorem of Arithmetic: almost all natural numbers are really damn huge. The universe doesn't have to be infinite to get a Tegmark Level I multiverse. It just has to be sufficiently large.

Comment author: qmotus 22 May 2017 01:18:16PM 1 point [-]

Are people close to you aware that this is a reason that you advocate cryonics?

Comment author: MrMind 17 May 2017 04:17:07PM 0 points [-]

Such as? Subjective immortality isn't implied by MWI without further cosmological assumptions.

Comment author: qmotus 22 May 2017 07:52:24AM 0 points [-]

What cosmological assumptions? Assumptions related to identity, perhaps, as discussed here. But it seems to me that MWI essentially guarantees that for every observer-moment, there will always exist a "subsequent" one, and the same seems to apply to all levels of a Tegmark multiverse.

Comment author: philh 16 May 2017 09:30:50AM *  0 points [-]

I agree that patternism contingently implies subjective immortality, but I agree with Oscar Cunningham that subjective immortality does not imply not-caring about death. I think patternism is stronger than beliefs that cause people to sign up for cryonics or step into the teleporter or read (even agree with) the QM sequence.

(I'm not convinced that the universe is large enough for patternism to actually imply subjective immortality.)

jul jrer lbh noyr gb haqretb narfgurfvn? (hayrff lbh pbagraq lbh jrer fgvyy pbafpvbhf rira gura)

(Fgvchyngvat ynetr-havirefr cnggreavfz.) Gurer'f na vafgnagvngvba bs zl cnggrea gung unf gur fhowrpgvir rkcrevrapr bs orvat tvira narfgurfvn naq erznvavat pbafpvbhf. Gurer'f znal bs gubfr. Ohg vg'f abg gur vafgnagvngvba gung rkvfgf ba rnegu, juvpu unf gur fhowrpgvir rkcrevrapr bs orvat tvira narfgurfvn naq gura jnxvat hc. Nyfb, nyzbfg nyy bs gubfr bgure cnggreaf qrpburer vagb fbzrguvat ragveryl hayvxr gur cnggrea ba rnegu.

Comment author: qmotus 17 May 2017 10:01:07AM 0 points [-]

(I'm not convinced that the universe is large enough for patternism to actually imply subjective immortality.)

Why wouldn't it be? That conclusion follows logically from many physical theories that are currently taken quite seriously.

Comment author: strangepoop 15 May 2017 11:36:23PM *  4 points [-]

Why does patternism [the position that you are only a pattern in physics and any continuations of it are you/you'd sign up for cryonics/you'd step into Parfit's teleporter/you've read the QM sequence]

not imply

subjective immortality? [you will see people dying, other people will see you die, but you will never experience it yourself]

(contingent on the universe being big enough for lots of continuations of you to exist physically)

I asked this on the official IRC, but only feep was kind enough to oblige (and had a unique argument that I don't think everyone is using)

If you have a completely thought out explanation for why it does imply that, you ought never to be worried about what you're doing leading to your death (maybe painful existence, but never death), because there would be a version of you that would miraculously escape it.

If you bite that bullet as well, then I would like you to formulate your argument cleanly, then answer this (rot13):

jul jrer lbh noyr gb haqretb narfgurfvn? (hayrff lbh pbagraq lbh jrer fgvyy pbafpvbhf rira gura)

ETA: This is slightly different from a Quantum Immortality question (although resolutions might be similar) - there is no need to involve QM or its interpretations here, even in a classical universe (as long as it's large enough), if you're a patternist, you can expect to "teleport" to another exact clone somewhere that manages to live.

Comment author: qmotus 16 May 2017 07:55:07AM 2 points [-]

I'm not willing to decipher your second question because this theme bothers me enough as it is, but I'll just say that I'm amazed figuring this stuff out is not considered a higher priority by rationalists. If at some point someone can definitely tell me what to think about this, I'd be glad about it.

Comment author: entirelyuseless 16 May 2017 04:56:58AM 0 points [-]

As I've pointed out before, we don't need to say whether patternism is true, or whether the universe is big or not, to notice that we are subjectively non-mortal -- no matter what is the case, we will never experience dying (in the sense of going out of existence.)

Comment author: qmotus 16 May 2017 07:34:25AM 0 points [-]

I guess we've had this discussion before, but: the difference between patternism and your version of subjective immortality is that in your version we nevertheless should not expect to exist indefinitely.

Comment author: fubarobfusco 20 March 2017 05:59:58PM *  4 points [-]

One possibility: Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.

Comment author: qmotus 21 March 2017 09:47:44AM 0 points [-]

I feel like it's rather obvious that this is approximately what is meant. The people who talk of democratizing AI are, mostly, not speaking about superintelligence or do not see it as a threat (with the exception of Elon Musk, maybe).

Comment author: moridinamael 28 October 2016 04:02:27PM 2 points [-]

If there is One Weird Trick that you should be using right now in order to game your way around anthropics, simulationism, or deontology, you don't know what that trick is, you won't figure out what that trick is, and it's somewhat likely that you can't figure out what that trick is, because if you did you would get hammered down by the acausal math/simulators/gods.

You also can't know if you're in a simulation, a Big quantum world, a big cosmological world, or if you're a reincarnation. Or one or more of those at the same time. And each of those realities would imply a different thing that you should be doing to optimize your ... whatever it is you should be optimizing. Which you also don't know.

So really I just go with my gut and try to generally make decisions that I probably won't think are stupid later given my current state of knowledge.

Comment author: qmotus 14 November 2016 08:53:30AM *  0 points [-]

You also can't know if you're in a simulation, a Big quantum world, a big cosmological world, or if you're a reincarnation

But you can make estimates of the probabilities (EY's estimate of the big quantum world part, for example, is very close to 1).

So really I just go with my gut and try to generally make decisions that I probably won't think are stupid later given my current state of knowledge.

That just sounds pretty difficult, as my estimate of whether a decision is stupid or not may depend hugely on the assumptions I make about the world. In some cases, the decision that would be not-stupid in a big world scenario could be the complete opposite of what would make sense in a non-big world situation.

In response to comment by qmotus on Quantum Bayesianism
Comment author: n4r9 05 November 2016 01:51:32PM *  0 points [-]

Depends what you mean by "about". The (strong) Qbist perspective is that probabilities, including those derived from quantum theory, represent an agent's beliefs concerning his future interactions with the world. If you're looking for what these probabilities tell us about the underlying "reality", then that's an open question, which Fuchs et al. are still exploring.

In response to comment by n4r9 on Quantum Bayesianism
Comment author: qmotus 14 November 2016 08:47:58AM 1 point [-]

If you're looking for what these probabilities tell us about the underlying "reality"

I am. It seems to me that if quantum mechanics is about probabilities, then those probabilities have to be about something: essentially, this seems to suggest that either the underlying reality is unknown, indicating that quantum mechanics needs to be modified somehow, or that Qbism is more like an "interpretation of MWI", where one chooses to only care about the one world she finds herself in.

Comment author: Lumifer 25 October 2016 02:39:08PM 2 points [-]

Because it would not fit into our values to consider exterminating them as the primary choice.

Did you ask the Native Americans whether they hold a similar opinion?

Comment author: qmotus 25 October 2016 03:30:38PM 2 points [-]

Fortunately, Native American populations didn't plummet because they were intentionally killed; they mostly did so because of diseases brought by Europeans.