nshepperd comments on Advice for AI makers - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
"Not observing a catastrophe" != "observing a non-catastrophe". If I'm playing russian roulette and I hear a click and survive, I see good reason to take that as extremely strong evidence that there was no bullet in the chamber.
But doesn't the anthropic argument still apply? Worlds where you survive playing Russian roulette are going to be ones where there wasn't a bullet in the chamber. You should expect to hear a click when you pull the trigger.
As it stands, I expect to die (p = 1/6) if I play Russian roulette. I don't hear a click if I'm dead.
That's the point. You can't observe anything if you are dead; therefore any observations you make are conditional on your being alive.
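A quick simulation makes the selection effect concrete (a minimal sketch, assuming a fair six-chamber revolver with one bullet and a single spin-and-pull; the numbers and variable names are illustrative, not from the original thread):

```python
import random

TRIALS = 100_000
survivors = 0

for _ in range(TRIALS):
    bullet = random.randrange(6)  # bullet sits in one of six chambers
    fired = random.randrange(6)   # spin the cylinder, pull the trigger once
    if fired != bullet:
        survivors += 1            # this player hears a click and lives

# Unconditionally, about 5/6 of players survive:
print(f"P(survive) ~= {survivors / TRIALS:.3f}")  # ~0.833

# But every player still able to report anything heard a click:
# P(click | alive) = 1 by construction, even though P(click) = 5/6
# before the trigger is pulled. The dead report nothing.
```

The gap between the 5/6 and the 1 is exactly the disputed quantity: whether the "missing" 1/6 of observers should figure in your expectations.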
Those universes where you die still exist, even if you don't observe them. If you carried your logic to its conclusion, there would be no risk in playing Russian roulette, which is absurd.
The standard excuse given by those who pretend to believe in many worlds is that you are likely to get maimed in the universes where you get shot but don't die, which is somewhat unpleasant. If you come up with a more reliable way to commit quantum suicide, like using a nuke, they find another excuse.
Methinks that is still a lack of understanding, or a disagreement on utility calculations. I myself would rate the universes where I die as even lower in utility than those where I get injured (indeed, the lowest possible utility).
Better still if I don't die in any of the universes.
I do think 'a disagreement on utility calculations' may indeed be a big part of it. Are you a total utilitarian? I'm not. A big part of that comes from the fact that I don't consider two copies of myself to be intrinsically more valuable than one - perhaps instrumentally valuable, if those copies can interact, sync their experiences and cooperate, but that's another matter. With experience-syncing, I am mostly indifferent to the number of copies of me that exist (leaving aside potential instrumental benefits), but without it my utility decreases as the number of copies increases, since I assign zero terminal value to multiplicity but positive terminal value to the uniqueness of my identity.
My brand of utilitarianism is informed substantially by these preferences. I adhere to neither average nor total utilitarianism, but I lean closer to average. Whilst I would be against the use of force to turn a population of 10 with X utility each into a population of 3 with (X + 1) utility each, I would in isolation consider the latter preferable to the former (there is no inconsistency here - my utility function simply admits information about the past).
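To make the comparison concrete with those numbers (a quick check, assuming utilities simply sum or average and X > 0):

```latex
\text{Total:}   \quad 10X \;\text{vs}\; 3(X+1) = 3X+3, \qquad 10X > 3X+3 \iff X > \tfrac{3}{7}
\text{Average:} \quad X   \;\text{vs}\; X+1,            \qquad X+1 > X \;\text{always}
```

So a total utilitarian prefers the population of 10 for any X > 3/7, while an average utilitarian always prefers the population of 3; leaning "closer to average" is what makes the smaller, happier population preferable in isolation.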
That line of thinking leads directly to recommending immediate probabilistic suicide, or at least indifference to it. No thanks.
How so?
I'm saying that you can only observe not dying, not that you shouldn't care about universes that you don't exist in or observe.
The risk in Russian roulette is that, in the worlds where you do survive, you will probably be lobotomized, or will drop the gun and shoot someone else, etc. Ignoring that, there is no risk - as long as you don't care about universes where you die.
Ok. I find this assumption absolutely crazy, but at least I comprehend what you are saying now.
Well, think of it this way: you are dead/non-existent in the vast majority of universes as it is.
How is that relevant? If I take some action that results in my death in some other Everett branch, then I have killed a human being in the multiverse.
Think about applying your argument to this universe. You shoot someone in the head, they die instantly, and then you say to the judge: "Well, think of it this way: he's not around to experience this. Besides, there are other worlds where I didn't shoot him, so he's not really dead!"
You can't appeal to common sense. That's the point of quantum immortality: it defies our common-sense notions about death. Obviously so, since we are used to assuming a single-threaded universe, where death is equivalent to ceasing to exist.
Of course, if you kill someone, you still cause that person pain in the vast majority of universes, as well as grief to their family and friends.
If Star Trek-style teleportation were possible by creating a clone and deleting the original, would that be equivalent to suicide/murder/death? If you could upload your mind to a computer but had to destroy your biological brain, would that be suicide, and would the upload really be you? Does destroying copies really matter as long as one lives on (assuming the copies don't suffer)?