All of qmotus's Comments + Replies

qmotus00

Actually, I'm just interested. I've been wondering whether big world immortality is a subject that would make people a) think that the speaker is nuts, b) freak out and possibly go nuts, or c) go nuts because they think the speaker is crazy; and whether or not it's a bad idea to bring it up.

qmotus10

Are people close to you aware that this is a reason that you advocate cryonics?

0gilch
I'm not sure what you're implying. Most people close to me are not even aware that I advocate cryonics. I expect this will change once I get my finances sorted out enough to actually sign up for cryonics myself, but for most people, cryonics alone already flunks the Absurdity heuristic. Likewise with many of the perfectly rational ideas here on LW, including the logical implications of quantum mechanics and cosmology, like Subjective Immortality. Linking more "absurdities" seems unlikely to help my case in most instances. One step at a time.
qmotus00

What cosmological assumptions? Assumptions related to identity, perhaps, as discussed here. But it seems to me that MWI essentially guarantees that for every observer-moment, there will always exist a "subsequent" one, and the same seems to apply to all levels of a Tegmark multiverse.

0entirelyuseless
I don't think MWI is sufficiently well defined or understood for it to be known whether or not that is implied. For example, it would not be the case in Robin Hanson's mangled-worlds proposal, and no one knows whether that proposal is correct or not.
qmotus00

(I'm not convinced that the universe is large enough for patternism to actually imply subjective immortality.)

Why wouldn't it be? That conclusion follows logically from many physical theories that are currently taken quite seriously.

0MrMind
Such as? Subjective immortality isn't implied by MWI without further cosmological assumptions.
0philh
Fair enough. I have no argument and low confidence, it just seems vaguely implausible.
qmotus30

I'm not willing to decipher your second question because this theme bothers me enough as it is, but I'll just say that I'm amazed that figuring this stuff out is not considered a higher priority by rationalists. If at some point someone can definitively tell me what to think about this, I'd be glad about it.

qmotus00

I guess we've had this discussion before, but: the difference between patternism and your version of subjective mortality is that in your version we nevertheless should not expect to exist indefinitely.

0entirelyuseless
Sure. Nonetheless, you should not expect anything noticeably different from that to happen either. The same kinds of things will happen: you will find yourself wondering why you were the lucky one who survived the car crash, not wondering why you were the unlucky one who did not.
qmotus00

I feel like it's rather obvious that this is approximately what is meant. The people who talk of democratizing AI are, mostly, not speaking about superintelligence or do not see it as a threat (with the exception of Elon Musk, maybe).

qmotus00

You also can't know if you're in a simulation, a Big quantum world, a big cosmological world, or if you're a reincarnation

But you can make estimates of the probabilities (EY's estimate of the big quantum world part, for example, is very close to 1).

So really I just go with my gut and try to generally make decisions that I probably won't think are stupid later given my current state of knowledge.

That just sounds pretty difficult, as my estimate of whether a decision is stupid or not may depend hugely on the assumptions I make about the world. In some... (read more)

1moridinamael
I meant the word "stupid" to carry a connotation of "obviously bad, obviously destroying value." Playing with my children rather than working extra hard to earn extra money to donate to MIRI will never be "stupid", although it may be in some sense the wrong choice if I end up being eaten by an AI. This is true for the same reasons that putting money in my 401K is obviously "not stupid", especially relative to giving that money to my brother-in-law who claims to have developed a new formula for weatherproofing roofs. Maybe my brother-in-law becomes a millionaire, but I'm still not going to feel like I made a stupid decision. You may rightly point out that I'm not being rational and/or consistent. I seem to be valuing safe, near-term bets over risky, long-term bets, regardless of what the payouts of those bets might be. Part of my initial point is that, as an ape, I pretty much have to operate that way in most situations if I want to remain sane and effective. There are some people who get through life by making cold utilitarian calculations and acting on even the most counterintuitive conclusions, but the psychological cost of behaving that way has not been worth it to me.
qmotus10

If you're looking for what these probabilities tell us about the underlying "reality"

I am. It seems to me that if quantum mechanics is about probabilities, then those probabilities have to be about something: essentially, this seems to suggest either that the underlying reality is unknown, indicating that quantum mechanics needs to be modified somehow, or that QBism is more like an "interpretation of MWI", where one chooses to care only about the one world she finds herself in.

0n4r9
The QBist stance is that we "know" very little about the underlying reality. One of the only things that Chris Fuchs is willing to accept as an objective property of a quantum system is its Hilbert space dimension. I doubt it's sensible to talk about an interpretation of MWI. MWI says that the wavefunction is a real physical object and wavefunction splitting is something that's genuinely physically occurring. QBism denies that the wavefunction is a real physical object.
qmotus30

Fortunately, Native American populations didn't plummet because they were intentionally killed, they mostly did so because of diseases brought by Europeans.

0Viliam
Maybe the aliens will bring some kind of nanotechnology that works okay with their ecosystem, but will destroy ours.
qmotus00

Thanks for the tip. I suppose I actually used to be pretty good at not giving too many fucks. I've always cared about stuff like human rights or climate change or, more lately, AI risk, but I've never really lost much sleep over them. Basically, I think it would be nice if we solved those problems, but the idea that humanity might go extinct in the future doesn't cause me too much headache in itself. The trouble is, I think, that I've lately begun to think that I may have a personal stake in this stuff, the point illustrated by the EY post that I linked to. See also my reply to moridinamael.

qmotus00

The part about not being excited about anything sounds very accurate and is certainly a part of the problem. I've also tried just taking up projects and focusing on them, but I should probably try harder as well.

However, a big part of the problem is that it's not just that those things feel insignificant; it's also that I have a vague feeling that I'm sort of putting my own well-being in jeopardy by doing that. As I said, I'm very confused about things like life, death and existence, on a personal level. How do I focus on mundane things when I'm confused a... (read more)

3moridinamael
If there is One Weird Trick that you should be using right now in order to game your way around anthropics, simulationism, or deontology, you don't know what that trick is, you won't figure out what that trick is, and it's somewhat likely that you can't figure out what that trick is, because if you did you would get hammered down by the acausal math/simulators/gods. You also can't know if you're in a simulation, a Big quantum world, a big cosmological world, or if you're a reincarnation. Or one or more of those at the same time. And each of those realities would imply a different thing that you should be doing to optimize your ... whatever it is you should be optimizing. Which you also don't know. So really I just go with my gut and try to generally make decisions that I probably won't think are stupid later given my current state of knowledge.
qmotus20

I'm having trouble figuring out what to prioritize in my life. In principle, I have a pretty good idea of what I'd like to do: for a while I have considered doing a Ph.D in a field that is not really high impact, but not entirely useless either, combining work that is interesting (to me personally) and hopefully a modest salary that I could donate to worthwhile causes.

But it often feels like this is not enough. Similar to what another user posted here a while ago, reading LessWrong and about effective altruism has made me feel like nothing except AI and may... (read more)

1morganism
AI•ON is an open community dedicated to advancing Artificial Intelligence by:
  1. Drawing attention to important yet under-appreciated research problems.
  2. Connecting researchers and encouraging open scientific collaboration.
  3. Providing a learning environment for students looking to gain machine learning experience.
http://ai-on.org/
2MrMind
I would suggest reading "The Subtle Art of Not Giving a Fuck". It's about how to properly choose our own values, how often we are distracted by bigger or impossible goals that exhaust our mental focus and only bring unhappiness, and which actually useful, smaller values bring much more happiness. It seems to be a perfect fit for your situation. It personally saved my life, but as with anything in self-help, your mileage may vary.
8moridinamael
I am essentially imagining you to be similar to me about five years ago. It sounds like you are not really excited about anything in your own life. You're probably more excited about far-future hypotheticals than about any project or prospect in your own immediate future. This is a problem because you are a primate who is psychologically deeply predisposed to be engaged with your environment and with other primates. I used to have similar problems of motivation and engagement with reality. At some point I just sort of became exhausted with it all and started working on "insignificant" projects like writing a book, working on an app, and raising kids. It turns out that focusing on things that are fun and engaging to work on is better for my mental health than worrying about how badly I'm failing to live up to my imagined ideal of a perfectly rational agent living in a Big World. If I find that I'm having to argue with myself that something is useful and I should do it, then I'm fighting my brain's deeply ingrained and fairly accurate Bullshit Detector Module. If I actually believe that a task is useful in the beliefs-as-constraints-for-anticipated-experience sense of "believe", then I'll just do it and not have any internal dialogue at all.
2morganism
I'd recommend taking up gardening, especially if you have a local community garden. Nothing like having your hands in the earth to ground you. You will also be surrounded by peaceful folk who care for each other and for the land; not a bad group to connect with. And you will be personally helping save the world just by growing and planting some trees. If you grow high-value woods, like cherry, you will be taking CO2 permanently out of circulation, if the wood is used for making things. Jump on a bike and go plant some apricots along old creek beds; it will help stabilize the soil and make food for people and animals. Even if you are living in the slums, you can go out and collect some lichen living on an old building, mix it up in a blender with whole milk, let it sit a couple of days, then go spray it into the cracks of an old brick building, or onto the sides of old concrete walls, and it will help purify the air. If you do the same with a lichen you find growing on an old tree, and spread it to other living trees, it will fix nitrogen from the air into plant-usable nitrates. Just dealing daily with living, growing things is very powerful for the psyche. And growing things, actually producing food, and giving it away is a very powerful form of altruism. Or you can just get a grow light, and use that to help relax.....
WalterL110

I'd suggest you prioritize your personal security. Once you have an income that doesn't take up much of your time, a place to live, a stable social circle, etc...then you can think about devoting your spare resources to causes.

The reason I'd make this suggestion is that personal liberty allows you to A/B test your decisions. If you set up a stable state and then experiment, and it turns out badly, you can just chuck the whole setup. If you throw yourself into a cause without setting things up for yourself and it doesn't work out, the fallout can be considerable.

qmotus00

I'm certainly not an instrumentalist. But the argument that MWI supporters (and some critics, like Penrose) generally make, and which I've found persuasive, is that MWI is simply what you get if you take quantum mechanics at face value. Theories like GRW have modifications to the well-established formalism that we, as far as I know, have no empirical confirmation of.

0TheAncientGeek
There are modified theories; there is no unequivocal "face value".
qmotus00

Fair enough. I feel like I have a fairly good intuitive understanding of quantum mechanics, but it's still almost entirely intuitive, and so is probably entirely inadequate beyond this point. But I've read speculations like this, and it sounds like things can get interesting: it's just that it's unclear to me how seriously we should take them at this stage, and also some of them take MWI as a starting point, too.

Regarding QBism, my idea of it is mostly based on a very short presentation of it by Rüdiger Schack at a panel, and the thing that confuses me is ... (read more)

0n4r9
Depends what you mean by "about". The (strong) QBist perspective is that probabilities, including those derived from quantum theory, represent an agent's beliefs concerning his future interactions with the world. If you're looking for what these probabilities tell us about the underlying "reality", then that's an open question, which Fuchs et al are still exploring.
1MrMind
Well, categorical quantum mechanics is a program under development since 2008, and it gives you a quantum framework in any computational theory with enough symmetries (databases, linguistics, etc.). It spawned quantum programming languages and a graphical calculus. So I think it's pretty successful and has to be taken seriously, although it's far from complete (it lacks a unified treatment of infinite systems, for example).
qmotus00

I'm not sure what you mean by OR, but if it refers to Penrose's interpretation (my guess, because it sounds like Orch-OR), then I believe that it indeed changes QM as a theory.

qmotus00

Guess I'll have to read that paper and see how much of it I can understand. Just at a glance, it seems that in the end they propose that one of the modified theories, like the GRW interpretation, might be the right way forward. I guess that's possible, but how seriously should we take those when we have no empirical reasons to prefer them?

0TheAncientGeek
Doesn't that rebound on the argument for MWI? Sincere and consistent instrumentalists may exist, but I think they are rare. What is much more common is for people to compartmentalise: to take an irrealist or instrumentalist stance about things that make them feel uncomfortable, while remaining cheerfully realist about other things. At the end of the day, being able to predict phenomena isn't that exciting. People generally do science because they want to find out about the world. And "rationalists", internet atheists and so on generally do have ontological commitments: to the non-existence of gods and ghosts, some view about whether or not we are in a matrix, and so on.
qmotus20

If it doesn't fundamentally change quantum mechanics as a theory, is the picture likely to turn out fundamentally different from MWI? Roger Penrose, a vocal MWI critic, seems to wholeheartedly agree that QM implies MWI; it's just that he thinks that this means the theory is wrong. David Deutsch, I believe, has said that he's not certain that quantum mechanics is correct; but any modification of the theory, according to him, is unlikely to do away with the parallel universes.

QBism, too, seems to me to essentially accept the MWI picture as the underlying ont... (read more)

0TheAncientGeek
CI/OR is a different picture from MWI, yet neither changes QM as a number-crunching theory. You have hit on the fundamental problems of empiricism: the correct interpretation of data is underdetermined by the data, and interpretations can differ radically with small changes in data, or no changes in data.
0MrMind
These are difficult questions, because we are speculating about future mathematics/physics. First of all, there's the question of how much of the quantum framework will survive the unification with gravity. Up until now, all theories that worked inside it have failed; worse, they have introduced black-hole paradoxes, most notably thunderbolts and the firewall problem. I'm totally in the dark about whether a future unification will require a modification of the fundamental mathematical structure of QM. Say, if ER = EPR, and entanglement can be explained with a modified geometry of space-time, does that mean that superposition is also a geometrical phenomenon that doesn't require multiple worlds? I don't really know. But more to the point, I think (hope?) that future explorations of the quantum framework will yield an expanded landscape, where interpretations will be seen as the surface phenomena of something deeper: for example, something akin to what happens in classical mechanics with the Hamiltonian/Lagrangian formulations. On a side note, I've read only the Wikipedia article on QBism, and my impression was that it had an epistemological leaning, not an ontological one: if you use only SIC-POVMs, you can explain all quantum quirks with the epistemology of probability distributions. I might be very wrong, though.
qmotus00

Do you think that we're likely to find something in those directions that would give a reason to prefer some other interpretation than MWI?

-2TheAncientGeek
We've already got a number of problems with MW -- see Dowker and Kent's paper. The question is whether there is anything better. To go back to my original question, EY appears not to have heard of QBism, RQM, and other interpretations that aren't mentioned in The Fabric of Reality.
0MrMind
My idea is more on the line of "in the future we are going to grasp a conceptual frame that would make sense of all interpretations" (or explain them away) rather than pointing to a specific interpretation.
qmotus10

It could be that reality has nasty things in mind for us that we can't yet see and that we cannot affect in any way, and therefore I would be happier if I didn't know of them in advance. Encountering a new idea like this that somebody has discovered is one of my constant worries when browsing this site.

qmotus00

Wouldn't that mean surviving alone?

0turchin
Looks like being alone ((( not nice. But QI favours the worlds where I am more able to survive, so maybe I will have some kind of superpowers or will be uploaded. So I will probably be able to create several friends, but not many, as that would change the probability distribution in DA and so make this outcome less probable. Another option is to put myself in a situation where my life or death is univocally connected with the lives of a group of people (e.g. if we are all in one submarine). In that case we would all survive.
qmotus-10

MUH has a certain appeal, but it has its problems as well, as you say (and substituting CUH for MUH feels a little ad hoc to me), and I fear parsimony can lead us astray here in any case. I still think it's a good attempt, but we should not be too eager to accept it.

Maybe you should make a map of reasons why this question matters. It's probably been regarded as an uninteresting question since it is difficult (if not impossible) to test empirically, and because of this humanity has overall not directed enough brainpower to solving it.

-1turchin
If we are able to find the answer, it will provide us with exact knowledge of the nature of reality and solve all remaining questions about qualia and consciousness. The latter will help solve the problem of the nature of mind and personal identity, and help with the creation of AI and uploading. So it will help us create safe AI and reach immortality. Those are very practical goals. The answer would also include a Theory of Everything, which would provide us with a complete understanding of physics. So having the answer is useful. We also have some evidence for possible solutions. In the preamble to the map I listed 5 types of possible evidence, but none of them is definitive.
qmotus40

Uh, I think you should format your post so that somebody reading that warning would also have time to react to it and actually avoid reading the thing you're warning about.

2turchin
Done using inverted text
qmotus00

With those assumptions (especially modal realism), I don't think your original statement that our simulation was not terminated this time quite makes sense; there could be a bajillion simulations identical to this one, and even if most of them were shut down, we wouldn't notice anything.

In fact, I'm not sure what saying "we are in a simulation" or "we are not in a simulation" exactly means.

0turchin
What you say is like quantum immortality for a world of many simulations. Let's name it "simulation immortality", as there is nothing quantum about it. I think it may be true, but it requires two conditions: many simulations, and a solution to the identity problem (is a copy of me in a remote part of the universe me?). I wrote about it here: http://lesswrong.com/lw/n7u/the_map_of_quantum_big_world_immortality/ Simulation immortality precisely neutralises the risk of the simulation being shut down. But if we accept the logic of quantum immortality, it works even more broadly, preventing any other x-risk, because in any case one person (the observer in his timeline) will survive. In the case of a simulation shutdown, it works nicely if the shutdown is instantaneous and uniform. But if the servers are shut down one by one, we will see the stars disappear, and for some period of time we will find ourselves in a strange and unpleasant world. A shutdown may take only a millisecond in the base universe, but it could take a long time in the simulated one, maybe days. A slow shutdown is an especially unpleasant scenario, for two reasons connected with "simulation immortality": 1. Because of simulation immortality, its chances rise dramatically: if, for example, 1000 simulations are shutting down and one of them is shutting down slowly, I (my copy) will find myself only in the one that is in the slow shutdown. 2. If I find myself in a slow shutdown, there are infinitely many identical simulations which are also experiencing a slow shutdown. This means that my slow shutdown will never end from my observer's point of view. Most probably, after all this adventure, I will find myself in a simulation whose shutdown was stopped or reversed, or that got stuck somewhere in the middle. TL;DR: So a shutdown of the simulation may be observed and may be unpleasant, and this is especially likely if there are infinitely many simulations. It will look like a very strange global catastrophe from the observer's point of view. I also wrote a lot of such things in m
qmotus00

It all looks like political fight between Plan A and Plan B. You suggest not to implement Plan B as it would show real need to implement Plan A (cutting emissions).

That's one thing. But also, let's say that we choose Plan B, and this is taken as a sign that reducing emissions is unnecessary and global emissions soar. We then start pumping aerosols into the atmosphere to cool the climate.

Then something happens and this process stops: we face unexpected technical hurdles, or maybe the implementation of this plan has been largely left to a smallish number... (read more)

0turchin
One more thing I would like to add: the management of climate risks depends on their predictability, and it seems that it is not very high. The climate is a very complex and chaotic system; it may react unexpectedly to our actions. This means that long-term actions are less favourable, since the situation could change many times during their implementation. Quick actions like solar radiation management are better for managing poorly predictable processes, as we could see the results of our actions and quickly cancel them, or make them stronger, if we don't like the results.
0turchin
I would also advocate for a mixture of both plans. One more reason is that they work on different timescales. Cutting emissions and removing CO2 at the current level of technology would take decades to have an impact on the climate, but geoengineering has a reaction time of around 1 year, so we could use it to cover bumps on the road. Such covering will be especially important if we consider the fact that even if we completely stop emissions, we would also stop the global dimming from coal burning, which would result in a 3°C jump. Stopping emissions may result in a temperature jump, and we need a protection system in that case. Anyway, we need to survive until stronger technologies arrive: using nanotech or genetic engineering we could solve the warming problem with smaller efforts, but we have to survive until that date. It looks to me like cutting emissions is overhyped and solar radiation management is "underhyped" in public opinion and funding, and by correcting this imbalance we could produce more common good.
qmotus00

I would still be a bit reluctant to advocate climate engineering, though. The main worry, of course, is that if we choose that route, we need to commit to it in the long term, like you said. Openly embracing climate engineering would probably also cause emissions to soar, as people would think that there's no need to even try to lower emissions any more. So, if for some reason the delivery of that sulfuric acid into the atmosphere or whatever was disrupted, we'd be in trouble. And do we know enough about such measures to say that they're safe? Of course, if we be... (read more)

0Lumifer
I don't know about that. I would expect the main worry to be that the Law of Unintended Consequences will do its usual thing except this time the relative size of its jaws compared to our ass will be... rather large.
0turchin
In the current political situation in the world, cutting emissions can't be implemented. Period. It may happen naturally in 20 years, after electric transportation takes over. Plan B should be implemented if the situation suddenly changes for the worse, e.g. if temperature jumps 3-5°C in one year. In that case, the only option we would have is to bomb the Pinatubo volcano to make it erupt again. But if we have prepared and tested sun-shielding measures, we could deploy them if the situation worsens. It all looks like a political fight between Plan A and Plan B. You suggest not to implement Plan B as it would show real need to implement Plan A (cutting emissions). But the same logic works in the opposite direction: they will not cut emissions, to press policymakers into implementing Plan B. ))) It looks like a prisoner's dilemma between the two plans.
qmotus00

I think many EAs consider climate change to be very important, but often just think that it receives a lot of attention already and that solving it is difficult, and that there are therefore better things to focus on. Like 80,000 Hours, for example.

qmotus00

Will your results ultimately take the form of blog posts such as those, or peer-reviewed publications, or something else?

I think FRI's research agenda is interesting and that they may very well work on important questions that hardly anyone else does, but I haven't yet supported them as I'm not certain about their ability to deliver actual results or the impact of their research, and find it a tad bit odd that it's supported by effective altruism organizations, since I don't see any demonstration of effectiveness so far. (No offence though, it looks promising.)

1Kaj_Sotala
The final output of this project will be a long article, either on FRI's website or a peer-reviewed publication or both; we haven't decided on that yet.
qmotus00

I wouldn't call cryonics life extension; sounds more like resurrection to me. And, well, "potentially indefinite life extension" after that, sure.

0turchin
Another good wording is "respawning", which comes from computer games and means the ability of a character to appear again; something like resurrection.
qmotus10

I bet many LessWrongers are just not interested in signing up. That's not irrational, or rational, it's just a matter of preferences.

qmotus20

either we are in a simulation or we are not, which is obviously true

Just wanted to point out that this is not necessarily true; in a large enough multiverse, there would be many identical copies of a mind, some of which would probably be "real minds" dwelling in "real brains", and some would be simulated.

qmotus00

If I copied your brain right now, but left you alive, and tortured the copy, you would not feel any pain (I assume). I could even torture it secretly and you would be none the wiser.

Well.. Let's say I make a copy of you at time t. I can also make them forget which one is which. Then, at time t + 1, I will tickle the copy a lot. After that, I go back in time to t - 1, tell you of my intentions and ask you if you expect to get tickled. What do you reply?

Does it make any sense to you to say that you expect to experience both being and not being tickled?

qmotus00
  1. Maybe; it would probably think so, at least if it wasn't told otherwise.

  2. Both would probably think so.

  3. All three might think so.

  4. I find that a bit scary.

  5. Wouldn't there, then, be some copies of me not being tortured and one that is being tortured?

0woodchopper
If I copied your brain right now, but left you alive, and tortured the copy, you would not feel any pain (I assume). I could even torture it secretly and you would be none the wiser. So go back to the scenario - you're killed, there are some exact copies made of your brain and some inexact copies. It has been shown that it is possible to torture an exact copy of your brain while not torturing 'you', so surely you could torture one or all of these reconstructed brains and you would have no reason to fear?
qmotus00

Let's suppose that the contents of a brain are uploaded to a computer, or that a person is anesthetized and a single atom in their brain is replaced. What exactly would it mean to say that personal identity doesn't persist in such situations?

0woodchopper
So, let's say you die, but a super intelligence reconstructs your brain (using new atoms, but almost exactly to specification), but misplaces a couple of atoms. Is that 'you'? If it is, let's say the computer then realises what it did wrong and reconstructs your brain again (leaving its first prototype intact), this time exactly. Which one is 'you'? Let's say the second one is 'you', and the first one isn't. What happens when the computer reconstructs yet another exact copy of your brain? If the computer told you it was going to torture the slightly-wrong copy of you (the one with a few atoms missing), would that scare you? What if it was going to torture the exact copy of you, but only one of the exact copies? There's a version of you not being tortured, what's to say that won't be the real 'you'?
qmotus00

If there's no objective right answer, you can just decide for yourself. If you want immortality and decide that a simulation of 'you' is not actually 'you', I guess you ('you'?) will indeed need to find a way to extend your biological life. If you're happy with just the simulation existing, then maybe brain uploading or FAI is the way to go. But we're not going to "find out" the right answer to those questions if there is no right answer.

But I think the concept of personal identity is inextricably linked to the question of how separate consciou

... (read more)
0woodchopper
I think consciousness arises from physical processes (as Dennett says), but that's not really solving the problem or proving it doesn't exist. Anyway, I think you are right in that if you think being mind-uploaded does or does not constitute continuing your personal identity or whatever, it's hard to say you are wrong. However, what if I don't actually know if it does, yet I want to be immortal? Then we have to study that to figure out what things we can do to keep the real 'us' existing and what don't. What if the persistence of personal identity is a meaningless pursuit?
qmotus00

Isn't it purely a matter of definition? You can say that a version of you with one atom of yourself is you or that it isn't; or that a simulation of you either is or isn't you; but there's no objective right answer. It is worth noting, though, that if you didn't tell the different-by-one-atom version, or the simulated version, about this, they would probably never question being you.

0woodchopper
If there's no objective right answer, then what does it mean to seek immortality? For example, if we found out that a simulation of 'you' is not actually 'you', would seeking immortality mean we can't upload our minds to machines and have to somehow figure out a way to keep the pink fleshy stuff that is our current brains around? If we found out that there's a new 'you' every time you go to sleep and wake up, wouldn't it make sense to abandon the quest for immortality as we already die every night? (Note, I don't actually think this happens. But I think the concept of personal identity is inextricably linked to the question of how separate consciousnesses, each feeling their own qualia, can arise.)
qmotus20

I suppose so, and that's where the problems for consequentialism arise.

qmotus00

What I've noticed is that this has caused me to slide towards prioritizing issues that affect me personally (meaning that I care somewhat more about climate change and less about animal rights than I have previously done).

qmotus20

Past surveys show that most LessWrongers are consequentialists, and many are also effective altruism advocates. What do they think of infinities in ethics?

As I've intuitively always favoured some kind of negative utilitarianism, this has caused me some confusion.

0James_Miller
Doesn't anthropics strongly push us to figure that the universe is infinite?
3RowanE
I'll come in to say yes, I agree these problems are confusing, although my ethics are weird and I'm only kind of a consequentialist. (I identify as amoral; in practice that means I act like an egoist but give consequentialist answers to ethical questions.)
qmotus20

Peak oil said we'd run out of oil Real Soon Now, full stop

Peak oil refers to the moment when the production of oil has reached a maximum, after which it declines. It doesn't say that we'll run out of oil soon, just that production will decline. If consumption increases at the same time, that will lead to scarcity.

If you are trying to rebuild you don't need much oil

Well, that probably depends on how much damage has been done. If civilization literally had to be rebuilt from scratch, I'd wager that a very significant portion of that cheap oil would have to be used.

-1Lumifer
Oh, yes it does.
qmotus30

Besides, can we now finally admit peak oil was wrong?

Unfortunately, we can't. While we're not going to run out of oil soon (in fact, we should stop burning it for climate reasons long before we do; also, peak oil is not about oil depletion), we are running out of cheap oil. The EROEI of oil has fallen significantly since we started extracting it on a large scale.

This is highly relevant for what is discussed here. In the early 20th century, we could produce around 100 units of energy from oil for every unit of energy we used to extract it; those rebuilding the civilization from scratch today or in the future would have to make do with far less.
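To make the EROEI point concrete, here is a small illustrative sketch (the specific EROEI figures are assumptions chosen for illustration, not measured values):

```python
def net_energy(gross_energy, eroei):
    """Energy left over for society after paying the energy cost of extraction."""
    invested = gross_energy / eroei
    return gross_energy - invested

# Early 20th century oil, EROEI around 100:
print(net_energy(100.0, 100))  # 99.0 units usable per 100 extracted

# A lower-EROEI modern source, say EROEI around 5:
print(net_energy(100.0, 5))    # 80.0 units usable per 100 extracted
```

The point is that the surplus shrinks faster than the EROEI number alone suggests once it falls toward the single digits, which is what would matter to a civilization rebuilding from scratch.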

1Lumifer
I am sure we can. Peak oil said we'd run out of oil Real Soon Now, full stop. The cost of oil has been rising since the early 20th century, as you point out; that's not what peak oil was all about. Again, we have confusion of technology and scale. The average cost of oil extraction is higher than it used to be. But that cost varies, considerably. If you are trying to rebuild you don't need much oil, so you only use the cheapest oilfields (e.g. the Saudi ones) and don't try to pave over the North Sea with oil rigs or set them up all over the Arctic.
qmotus00

Another interpretation is that it is a name for an implication of MWI that even many people who fully accept MWI seem to somehow miss (or deny, for some reason; just have a look at discussions in relevant Reddit subs, for example).

Objective-collapse theories in a spatially or temporally infinite universe or with eternal inflation etc. actually say that it holds with nonzero but very small probability, but essentially give it an infinite number of chances to happen, meaning that this scenario is for all practical purposes identical to MWI. But I think wha... (read more)

qmotus00

I have never been able to understand what different predictions about the world anyone expects if "QI works" versus if "QI doesn't work", beyond the predictions already made by physics.

Turchin may have something else in mind, but personally (since I've also used this expression several times on LW) I mean something like this: usually people think that when they die, their experience will be irreversibly lost (unless extra measures like cryonics are taken, or they are religious), meaning that the experiences they have just prior to de... (read more)

1gjm
I think your last paragraph is the key point here. Forget about QI; MWI says some small fraction of your future measure will be alive very far into the future (for ever? depends on difficult cosmological questions); even objective-collapse theories say that this holds with nonzero but very small probability (which I suggest you should feel exactly the same way about); every theory, quantum or otherwise, says that at no point will you experience being dead-and-unable-to-experience things; all QI seems to me to add to this is a certain attitude.
qmotus00

I find that about as convincing as "if you see a watch there must be a watchmaker" style arguments.

I don't see the similarity here.

There are a number of ways theorized to test if we're in various kinds of simulation and so far they've all turned up negative.

Oh?

String theory is famously bad at being usable to predict even mundane things even if it is elegant and "flat" is not the same as "infinite".

It basically makes no new testable predictions right now. Doesn't mean that it won't do so in the future. (I have no opi... (read more)

qmotus00

As yet we have ~zero evidence for being in a simulation.

We have evidence (albeit no "smoking-gun evidence") for eternal inflation, we have evidence for a flat and thus infinite universe, string theory is right now our best guess at what the theory of everything is like; these all predict a multiverse where everything possible happens and where somebody should thus be expected to simulate you.

Your odds of waking up in the hands of someone extremely unfriendly is unchanged. You're just making it more likely that one fork of yourself might wak

... (read more)
1HungryHobo
I find that about as convincing as "if you see a watch there must be a watchmaker" style arguments. There are a number of ways theorized to test if we're in various kinds of simulation and so far they've all turned up negative. String theory is famously bad at being usable to predict even mundane things even if it is elegant and "flat" is not the same as "infinite".
qmotus-10

you can somewhat salvage traditional notions of fear ... Simulationist Heaven ... It does take the sting off death though

I find the often prevalent optimism on LW regarding this a bit strange. Frankly, I find this resurrection stuff quite terrifying myself.

I am continuously amused how catholic this cosmology ends up by sheer logic.

Yeah. It does make me wonder if we should take a lot more critical stance towards the premises that lead us to it. Sure enough, the universe is under no obligation to make any sense to us; but isn't it still a bit suspicious that it's turning out to be kind of bat-shit insane?

qmotus00

Of course not. But whether people here agree with him or not, they usually at least think that his arguments need to be considered seriously.

qmotus00

I don't believe in nested simulverse etc

You mean none of what I mentioned? Why not?

but I feel I should point out that even if some of those things were true waking up one way does not preclude waking up one or more of the other ways in addition to that.

You're right. I should have said "make it more likely", not "make sure".

0HungryHobo
Same reason I don't believe in god. As yet we have ~zero evidence for being in a simulation. Your odds of waking up in the hands of someone extremely unfriendly are unchanged. You're just making it more likely that one fork of yourself might wake up in friendly hands.
qmotus10

I think the point is that if extinction is not immediate, then the whole civilisation can't exploit big world immortality to survive; every single member of that civilisation would still survive in their own piece of reality, but alone.

0akvadrako
It doesn't really matter if it's immediate according to empty individualism. Instead the chance of survival in the branches where you try to die must be much lower than the chance of choosing that world. You can never make a perfect doomsday device, because all kinds of things could happen to make it fail at the moment or during preparation. Even if it operates immediately.
qmotus00

By "the preface" do you mean the "memetic hazard warnings"?

Yes.

I don't think that is claiming that it is a rational response to claims about the world.

I don't get this. I see a very straightforward claim that cryonics is a rational response. What do you mean?

This is a quantum immortality argument. If you actually believe in quantum immortality, you have bigger problems. Here is Eliezer offering cryonics as a solution to those, too.

I've read that as well. It's the same argument, essentially (quantum immortality doesn't actually... (read more)
