Are people close to you aware that this is a reason that you advocate cryonics?
What cosmological assumptions? Assumptions related to identity, perhaps, as discussed here. But it seems to me that MWI essentially guarantees that for every observer-moment, there will always exist a "subsequent" one, and the same seems to apply to all levels of a Tegmark multiverse.
(I'm not convinced that the universe is large enough for patternism to actually imply subjective immortality.)
Why wouldn't it be? That conclusion follows logically from many physical theories that are currently taken quite seriously.
I'm not willing to decipher your second question because this theme bothers me enough as it is, but I'll just say that I'm amazed figuring this stuff out is not considered a higher priority by rationalists. If at some point someone can definitively tell me what to think about this, I'd be glad about it.
I guess we've had this discussion before, but: the difference between patternism and your version of subjective mortality is that in your version we nevertheless should not expect to exist indefinitely.
I feel like it's rather obvious that this is approximately what is meant. The people who talk of democratizing AI are mostly either not talking about superintelligence or do not see it as a threat (with the exception of Elon Musk, maybe).
You also can't know if you're in a simulation, a Big quantum world, a big cosmological world, or if you're a reincarnation
But you can make estimates of the probabilities (EY's estimate of the big quantum world part, for example, is very close to 1).
So really I just go with my gut and try to generally make decisions that I probably won't think are stupid later given my current state of knowledge.
That just sounds pretty difficult, as my estimate of whether a decision is stupid or not may depend hugely on the assumptions I make about the world. In some...
If you're looking for what these probabilities tell us about the underlying "reality"
I am. It seems to me that if quantum mechanics is about probabilities, then those probabilities have to be about something: essentially, this seems to suggest that either the underlying reality is unknown, indicating that quantum mechanics needs to be modified somehow, or that QBism is more like an "interpretation of MWI", where one chooses to only care about the one world she finds herself in.
Fortunately, Native American populations didn't plummet because they were intentionally killed; they mostly did so because of diseases brought by Europeans.
Thanks for the tip. I suppose I actually used to be pretty good at not giving too many fucks. I've always cared about stuff like human rights or climate change or, more lately, AI risk, but I've never really lost much sleep over them. Basically, I think it would be nice if we solved those problems, but the idea that humanity might go extinct in the future doesn't cause me too much headache in itself. The trouble is, I think, that I've lately begun to think that I may have a personal stake in this stuff, the point illustrated by the EY post that I linked to. See also my reply to moridinamael.
The part about not being excited about anything sounds very accurate and is certainly a part of the problem. I've also tried just taking up projects and focusing on them, but I should probably try harder as well.
However, a big part of the problem is that it's not just that those things feel insignificant; it's also that I have a vague feeling that I'm sort of putting my own well-being in jeopardy by doing that. As I said, I'm very confused about things like life, death and existence, on a personal level. How do I focus on mundane things when I'm confused a...
I'm having trouble figuring out what to prioritize in my life. In principle, I have a pretty good idea of what I'd like to do: for a while I have been considering doing a Ph.D. in a field that is not really high impact, but not entirely useless either, combining work that is interesting (to me personally) and hopefully a modest salary that I could donate to worthwhile causes.
But it often feels like this is not enough. Similar to what another user posted here a while ago, reading LessWrong and about effective altruism has made me feel like nothing except AI and may...
I'd suggest you prioritize your personal security. Once you have an income that doesn't take up much of your time, a place to live, a stable social circle, etc., then you can think about devoting your spare resources to causes.
The reason I'd make this suggestion is that personal liberty allows you to A/B test your decisions. If you set up a stable state and then experiment, and it turns out badly, you can just chuck the whole setup. If you throw yourself into a cause without setting things up for yourself and it doesn't work out, the fallout can be considerable.
I'm certainly not an instrumentalist. But the argument that MWI supporters (and some critics, like Penrose) generally make, and which I've found persuasive, is that MWI is simply what you get if you take quantum mechanics at face value. Theories like GRW introduce modifications to the well-established formalism for which we, as far as I know, have no empirical confirmation.
Fair enough. I feel like I have a fairly good intuitive understanding of quantum mechanics, but it's still almost entirely intuitive, and so is probably entirely inadequate beyond this point. But I've read speculations like this, and it sounds like things can get interesting: it's just that it's unclear to me how seriously we should take them at this stage, and also some of them take MWI as a starting point, too.
Regarding QBism, my idea of it is mostly based on a very short presentation of it by Rüdiger Schack at a panel, and the thing that confuses me is ...
I'm not sure what you mean by OR, but if it refers to Penrose's interpretation (my guess, because it sounds like Orch-OR), then I believe that it indeed changes QM as a theory.
Guess I'll have to read that paper and see how much of it I can understand. Just at a glance, it seems that in the end they propose that one of the modified theories, like GRW, might be the right way forward. I guess that's possible, but how seriously should we take those when we have no empirical reasons to prefer them?
If it doesn't fundamentally change quantum mechanics as a theory, is the picture likely to turn out fundamentally different from MWI? Roger Penrose, a vocal MWI critic, seems to wholeheartedly agree that QM implies MWI; it's just that he thinks that this means the theory is wrong. David Deutsch, I believe, has said that he's not certain that quantum mechanics is correct; but any modification of the theory, according to him, is unlikely to do away with the parallel universes.
QBism, too, seems to me to essentially accept the MWI picture as the underlying ont...
Do you think that we're likely to find something in those directions that would give a reason to prefer some other interpretation than MWI?
It could be that reality has nasty things in mind for us that we can't yet see and that we cannot affect in any way, and therefore I would be happier if I didn't know of them in advance. Encountering a new idea like this that somebody has discovered is one of my constant worries when browsing this site.
Wouldn't that mean surviving alone?
MUH has a certain appeal, but it has its problems as well, as you say (and substituting CUH for MUH feels a little ad hoc to me), and I fear parsimony can lead us astray here in any case. I still think it's a good attempt, but we should not be too eager to accept it.
Maybe you should make a map of reasons why this question matters. It's probably been regarded as an uninteresting question since it is difficult (if not impossible) to test empirically, and because of this humanity has overall not directed enough brainpower to solving it.
Uh, I think you should format your post so that somebody reading that warning would also have time to react to it and actually avoid reading the thing you're warning about.
With those assumptions (especially modal realism), I don't think your original statement that our simulation was not terminated this time quite makes sense; there could be a bajillion simulations identical to this one, and even if most of them were shut down, we wouldn't notice anything.
In fact, I'm not sure what saying "we are in a simulation" or "we are not in a simulation" exactly means.
It all looks like a political fight between Plan A and Plan B. You suggest not to implement Plan B as it would show real need to implement Plan A (cutting emissions).
That's one thing. But also, let's say that we choose Plan B, and this is taken as a sign that reducing emissions is unnecessary and global emissions soar. We then start pumping aerosols into the atmosphere to cool the climate.
Then something happens and this process stops: we face unexpected technical hurdles, or maybe the implementation of this plan has been largely left to a smallish number...
I would still be a bit reluctant to advocate climate engineering, though. The main worry, of course, is that if we choose that route, we need to commit to it in the long term, like you said. Openly embracing climate engineering would probably also cause emissions to soar, as people would think that there's no need to even try to lower emissions any more. So, if for some reason the delivery of that sulfuric acid into the atmosphere or whatever was disrupted, we'd be in trouble. And do we know enough of such measures to say that they're safe? Of course, if we be...
I think many EAs consider climate change to be very important, but often just think that it receives a lot of attention already and solving it is difficult, and that there are therefore better things to focus on. Like 80,000 Hours, for example.
Will your results ultimately take the form of blog posts such as those, or peer-reviewed publications, or something else?
I think FRI's research agenda is interesting and that they may very well work on important questions that hardly anyone else does, but I haven't yet supported them as I'm not certain about their ability to deliver actual results or the impact of their research, and find it a tad bit odd that it's supported by effective altruism organizations, since I don't see any demonstration of effectiveness so far. (No offence though, it looks promising.)
I wouldn't call cryonics life extension; sounds more like resurrection to me. And, well, "potentially indefinite life extension" after that, sure.
I bet many LessWrongers are just not interested in signing up. That's not irrational, or rational, it's just a matter of preferences.
either we are in a simulation or we are not, which is obviously true
Just wanted to point out that this is not necessarily true; in a large enough multiverse, there would be many identical copies of a mind, some of which would probably be "real minds" dwelling in "real brains", and some would be simulated.
If I copied your brain right now, but left you alive, and tortured the copy, you would not feel any pain (I assume). I could even torture it secretly and you would be none the wiser.
Well... Let's say I make a copy of you at time t. I can also make both of you forget which one is which. Then, at time t + 1, I will tickle the copy a lot. After that, I go back in time to t - 1, tell you of my intentions and ask you if you expect to get tickled. What do you reply?
Does it make any sense to you to say that you expect to experience both being and not being tickled?
Maybe; it would probably think so, at least if it wasn't told otherwise.
Both would probably think so.
All three might think so.
I find that a bit scary.
Wouldn't there, then, be some copies of me not being tortured and one that is being tortured?
Let's suppose that the contents of a brain are uploaded to a computer, or that a person is anesthetized and a single atom in their brain is replaced. What exactly would it mean to say that personal identity doesn't persist in such situations?
If there's no objective right answer, you can just decide for yourself. If you want immortality and decide that a simulation of 'you' is not actually 'you', I guess you ('you'?) will indeed need to find a way to extend your biological life. If you're happy with just the simulation existing, then maybe brain uploading or FAI is the way to go. But we're not going to "find out" the right answer to those questions if there is no right answer.
...But I think the concept of personal identity is inextricably linked to the question of how separate consciou
Isn't it purely a matter of definition? You can say that a version of you that differs from you by one atom is you or that it isn't; or that a simulation of you either is or isn't you; but there's no objective right answer. It is worth noting, though, that if you don't tell the different-by-one-atom version, or the simulated version, of the fact, they would probably never question being you.
I suppose so, and that's where the problems for consequentialism arise.
What I've noticed is that this has caused me to slide towards prioritizing issues that affect me personally (meaning that I care somewhat more about climate change and less about animal rights than I have previously done).
Past surveys show that most LessWrongers are consequentialists, and many are also effective altruism advocates. What do they think of infinities in ethics?
As I've intuitively always favoured some kind of negative utilitarianism, this has caused me some confusion.
Peak oil said we'd run out of oil Real Soon Now, full stop
Peak oil refers to the moment when the production of oil has reached a maximum, after which it declines. It doesn't say that we'll run out of it soon, just that production will begin to decline. If consumption increases at the same time, it'll lead to scarcity.
If you are trying to rebuild you don't need much oil
Well, that probably depends on how much damage has been done. If civilization literally had to be rebuilt from scratch, I'd wager that a very significant portion of that cheap oil would have to be used.
Besides, can we now finally admit peak oil was wrong?
Unfortunately, we can't. While we're not going to run out of oil soon (in fact, we should stop burning it for climate reasons long before we do; also, peak oil is not about oil depletion), we are running out of cheap oil. The EROEI of oil has fallen significantly since we started extracting it on a large scale.
This is highly relevant for what is discussed here. In the early 20th century, we could produce around 100 units of energy from oil for every unit of energy we used to extract it; those rebuilding the civilization from scratch today or in the future would have to make do with far less.
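A rough back-of-the-envelope sketch of why the falling EROEI matters (the specific numbers below are illustrative assumptions, not figures from the comment above):

\[
\text{EROEI} = \frac{E_{\text{returned}}}{E_{\text{invested}}}, \qquad
\text{net fraction} = 1 - \frac{1}{\text{EROEI}}
\]

At an EROEI of about 100, roughly 99% of the gross energy extracted is left over as net energy for the rest of society; at 10, only about 90%; and as EROEI approaches 1, extraction yields essentially no net energy at all.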
Another interpretation is that it is a name for an implication of MWI that even many people who fully accept MWI seem to somehow miss (or deny, for some reason; just have a look at discussions in relevant Reddit subs, for example).
Objective-collapse theories in a spatially or temporally infinite universe or with eternal inflation etc. actually say that it holds with nonzero but very small probability, but essentially give it an infinite number of chances to happen, meaning that this scenario is for all practical purposes identical to MWI. But I think wha...
I have never been able to understand what different predictions about the world anyone expects if "QI works" versus if "QI doesn't work", beyond the predictions already made by physics.
Turchin may have something else in mind, but personally (since I've also used this expression several times on LW) I mean something like this: usually people think that when they die, their experience will be irreversibly lost (unless extra measures like cryonics are taken, or they are religious), meaning that the experiences they have just prior to de...
I find that about as convincing as "if you see a watch there must be a watchmaker" style arguments.
I don't see the similarity here.
There are a number of ways theorized to test if we're in various kinds of simulation and so far they've all turned up negative.
Oh?
String theory is famously bad at being usable to predict even mundane things even if it is elegant and "flat" is not the same as "infinite".
It basically makes no new testable predictions right now. Doesn't mean that it won't do so in the future. (I have no opi...
As yet we have ~zero evidence for being in a simulation.
We have evidence (albeit no "smoking-gun evidence") for eternal inflation, we have evidence for a flat and thus infinite universe, string theory is right now our best guess at what the theory of everything is like; these all predict a multiverse where everything possible happens and where somebody should thus be expected to simulate you.
...Your odds of waking up in the hands of someone extremely unfriendly is unchanged. You're just making it more likely that one fork of yourself might wak
you can somewhat salvage traditional notions of fear ... Simulationist Heaven ... It does take the sting off death though
I find the often prevalent optimism on LW regarding this a bit strange. Frankly, I find this resurrection stuff quite terrifying myself.
I am continuously amused how catholic this cosmology ends up by sheer logic.
Yeah. It does make me wonder if we should take a much more critical stance towards the premises that lead us to it. Sure enough, the universe is under no obligation to make any sense to us; but isn't it still a bit suspicious that it's turning out to be kind of bat-shit insane?
Of course not. But whether people here agree with him or not, they usually at least think that his arguments need to be considered seriously.
I don't believe in nested simulverse etc
You mean none of what I mentioned? Why not?
but I feel I should point out that even if some of those things were true waking up one way does not preclude waking up one or more of the other ways in addition to that.
You're right. I should have said "make it more likely", not "make sure".
I think the point is that if extinction is not immediate, then the whole civilisation can't exploit big world immortality to survive; every single member of that civilisation would still survive in their own piece of reality, but alone.
By "the preface" do you mean the "memetic hazard warnings"?
Yes.
I don't think that is claiming that it is a rational response to claims about the world.
I don't get this. I see a very straightforward claim that cryonics is a rational response. What do you mean?
This is a quantum immortality argument. If you actually believe in quantum immortality, you have bigger problems. Here is Eliezer offering cryonics as a solution to those, too.
I've read that as well. It's the same argument, essentially (quantum immortality doesn't actually...
Actually, I'm just interested. I've been wondering if big world immortality is a subject that would make people a) think that the speaker is nuts, b) freak out and possibly go nuts or c) go nuts because they think the speaker is crazy; and whether or not it's a bad idea to bring it up.