MattG comments on Open Thread, January 4-10, 2016 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Maybe it's just the particular links I have been following (acausal trade and blackmail, AI boxes you, the Magnum Innominandum), but I keep coming across the idea that the self should care about the well-being (it seems to always come back to torture) of one or of a googolplex of simulated selves. I can't find a single argument or proof of why this should be so. I accept that perfectly simulated sentient beings can be seen as morally equal in value to meat sentient beings (or, if we accept Bostrom's reasoning, that beings in a simulation other than our own can be seen as morally equal to us). But why value the simulated self over the simulated other? I accept that I can care in a blackmail situation where I might unknowingly be one of the simulations (a la Dr. Evil, or the AI boxes me), but that's not the same as inherently caring about (or having nightmares about) what may happen to a simulated version of me in the past, present, or future.
Any thoughts on why thou shalt love thy simulation as thyself?
There is Bostrom's argument - but there's also another take on these types of scenarios, which you may be confusing with the Bostrom argument. In those takes, you're not sure whether you're the simulation or the original - and since there are billions of simulations, the odds are billions to one that you're one of the simulations that gets tortured.
Just make sure you're not pattern matching to the first type of argument when it's actually the second.
I appreciate the reply. I recognize both of those arguments, but I am asking something different. If Omega tells me to give him a dollar or he tortures a simulation - a being separate from me, with no threat that I might be that simulation (also thinking of the Basilisk here) - why should I care whether that simulation is one of me as opposed to any other sentient being?
I see them as equally valuable. Both are not-me. Identical-to-me is still not-me. If I am a simulation and I meet another simulation of me in Thunderdome (Omega is an evil bastard), I'm going to kill that other guy just the same as if he were someone else. I don't get why sim-self is of greater value than sim-other.

Everything I've read here (admittedly not too much) seems to assume this as self-evident, but I can't find a basis for it. Is the "it could be you who is tortured" just implied in all of these examples and I'm not up on the convention? I don't see it specified, and in "The AI boxes you" the "it could be you" is tacked on as an additional threat on top of "I will torture simulations of you", implying that the original threat alone is supposed to give pause.
If you love your simulation as you love yourself, they will love you as they love themselves (and if you don't, they won't). You can choose to have enemies or allies with your own actions.
You and a thousand simulations of you play a game where pressing a button gives the presser $500 but takes $1 from each of the other players. Do you press the button?
I don't play; craps is the only sucker bet I enjoy engaging in. But if coerced to play, I press with non-sims and don't press with sims - not out of love, but out of an intimate knowledge of my opponents' expected actions, out of my status as a reliable predictor in this unique circumstance.
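For what it's worth, the payoff arithmetic backs that up. Here is a minimal sketch in Python (the function name and structure are just illustrative, assuming the game exactly as stated above): against exact copies whose choice is locked to mine, pressing costs everyone money, while against independent players pressing adds $500 to my own payoff no matter what they do.

    def payoff(i_press, others_pressing, gain=500, loss=1):
        # My payoff: I gain $500 if I press, and lose $1 for each
        # of the other players who presses.
        return (gain if i_press else 0) - loss * others_pressing

    # Opponents are exact simulations of me: they do whatever I do.
    everyone_presses = payoff(True, 1000)   # 500 - 1000 = -500
    nobody_presses = payoff(False, 0)       # 0

    # Opponents are independent non-sims: whatever number k of them press,
    # pressing adds +500 to my own payoff, so it dominates causally.
    dominance = [payoff(True, k) - payoff(False, k) for k in (0, 500, 1000)]
    # -> [500, 500, 500]

So with a thousand copies who will do whatever I do, "don't press" is simply the better deal - no altruism toward sim-me required.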