DanielVarga comments on References & Resources for LessWrong - Less Wrong
No, that's not what would happen. Rather, being faithful to your commitment, you would go on a practically infinite suicide spree (*) searching for your father. A long and melancholic story with a surprise happy ending.
(*) I googled it and was sad to see that the phrase "suicide spree" is already taken for a different concept.
I'm not sure where you think we disagree? Personally, if I were going to take MWI and quantum suicide absolutely seriously, I'd still make the best of every branch. All quantum suicide does is cancel out the copies you deem to have unworthy experiences. But why would I do that if it changes nothing about the positive branches?
My reply wasn't meant to be taken seriously, and I don't take the idea of quantum suicide seriously. But to answer your question, here is the disagreement, or really, me nitpicking for the sake of comedic effect:
In your scenario, most of the copies will NOT be in universes with your father. Most of them will be in the process of committing suicide. This is because -- at least the way I interpreted your wording -- your scenario differs from the classic quantum lottery scenario in that here it is you who evaluates whether you are in the right universe or not.
Yes, we agree. So how seriously do you take MWI? I'm not sure I understand how someone could take MWI seriously but not quantum suicide. I haven't read the sequence on it yet, though.
Easy - if you believe in MWI, but your utility function assigns value to the amount of measure you exist in, then you don't believe in quantum suicide. This is the most rational position, IMO.
I am absolutely uninterested in the amount of measure I exist in, per se. (*) I am interested in the emotional pain a quantum suicide would inflict on measure 0.9999999 of my friends and relatives.
(*) If God builds a perfect copy of the whole universe, this will not increase my utility the slightest.
This is a potentially coherent value system, but I note that it contains a distinct hint of arbitrariness. You could, technically, like life, dislike death, like happy relatives and care about everything in the branches in which you live, but only care about everything except yourself in branches in which you die. But that seems likely to be just a patch job on the intuitions.
Are you sure about this? Isn't my preference simply a result of a value system that values the happiness of living beings in every branch? (Possibly weighted with how similar / emotionally close they are to me, but that's not really necessary.) If I kill myself in every branch except in those where I win the lottery, then there will be many branches with (N-1) sad relatives, and a few branches with 1 happy me and (N-1) neutral relatives. So I don't do that. Is there really anything arbitrary about this?
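The branch accounting in the comment above can be made concrete with a toy calculation. This is only an illustrative sketch: the measure of the winning branch, the number of relatives, and all the utility values are assumptions, not figures from the discussion.

```python
# Toy model of the quantum-lottery argument: compare total
# measure-weighted utility with and without quantum suicide.
# All numbers are illustrative assumptions.
P_WIN = 1e-7           # measure of branches where "I" win the lottery
N_RELATIVES = 5        # assumed number of close relatives

U_HAPPY_ME = 10.0      # my utility in a winning branch
U_SAD_RELATIVE = -5.0  # a grieving relative in a losing branch
U_NEUTRAL = 0.0        # baseline utility

def total_utility(commit_quantum_suicide: bool) -> float:
    """Utility summed over branches, weighted by measure."""
    # Winning branches: happy me, neutral relatives.
    win = P_WIN * (U_HAPPY_ME + N_RELATIVES * U_NEUTRAL)
    if commit_quantum_suicide:
        # Losing branches: I am gone, relatives grieve.
        lose = (1 - P_WIN) * (N_RELATIVES * U_SAD_RELATIVE)
    else:
        # Losing branches: alive but didn't win; everyone roughly neutral.
        lose = (1 - P_WIN) * (U_NEUTRAL + N_RELATIVES * U_NEUTRAL)
    return win + lose

# The grief in the overwhelming measure of losing branches dominates
# the tiny measure of winning branches, so not committing wins.
assert total_utility(False) > total_utility(True)
```

With these numbers the suicide strategy scores roughly -25 against roughly 0 for abstaining, matching the comment's point: a value system that just cares about everyone's happiness in every branch already rules out quantum suicide, with no special clause needed.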
The part that surprises me is that you do care about all the branches (relatives, etc) yet in those branches you don't care if you die. You'll note that I assumed you preferred death to life? In those worlds you seem to have a preference for happy vs sad relatives but have somehow (and here is where I would say 'arbitrarily') decided you don't care whether you live or die.
Say, for example, that you have a moderate aversion to having one of your little toes broken. You set up a quantum lottery where, in the 'lose' branches, you have your little toe broken instead of being killed. Does that seem better or worse to you? I mean, there is the suffering of someone near and dear to you, so I assume that seems bad to you. Yet it seems to me that if you care about the branch at all, then you would prefer 'sore toe' to 'death' when you lose!
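The toe-lottery comparison can be sketched the same way. The point only goes through if losing branches where you stay alive get some positive value; under the "no survival instinct" valuation being debated, death in a losing branch is merely neutral. All utility values here are illustrative assumptions.

```python
# Compare the two lotteries: lose -> broken toe vs. lose -> death.
# Utilities are assumptions for illustration.
P_WIN = 1e-7       # measure of winning branches (same in both lotteries)
U_ALIVE = 1.0      # value of staying alive in a losing branch
U_SORE_TOE = -0.1  # moderate aversion to a broken toe
U_DEAD = 0.0       # the "no survival instinct" valuation of death

def lose_branch_utility(outcome_value: float) -> float:
    """Measure-weighted utility contributed by the losing branches."""
    return (1 - P_WIN) * outcome_value

# If being alive in losing branches has any positive value,
# 'sore toe' strictly beats 'death' on the losing side.
assert lose_branch_utility(U_ALIVE + U_SORE_TOE) > lose_branch_utility(U_DEAD)
```

This makes the objection explicit: anyone who cares about what happens in the branches where they lose should prefer the toe lottery, so valuing death at exactly zero in those branches is the "arbitrary" move.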
You are right that my proposed value system does not incorporate survival instinct, and this makes it sound weird, as survival instinct is an important part of every actual human value system, including mine. Your broken toe example shows this nicely.
So why did I get rid of survival instinct? Because you argued that what I wrote "contains a distinct hint of arbitrariness". I think it doesn't. I care for everyone's preferences, and a dead body has no preferences. And to decide against quantum suicide, that is all that is needed. In place of survival instinct we basically have the disincentive of grieving relatives.
When we explicitly add survival instinct, the ingredient you rightfully miss, then yes, the result will indeed become somewhat messy. But the reason for this mess is the added ingredient itself, not the other, clean part, nor its interaction with that part. I just don't think survival instinct can be turned into a coherent, formalized value. So the bug is not in my proposed idealized value system; the bug is in my actual messy human value system.
This approach, by the way, affects my views on cryonics, too.
Surely actually performing quantum suicide would be very stupid.
I get the impression that some people consider "take quantum suicide seriously" equivalent to "think doing it is a good idea". That makes not taking it seriously a good option.