I came up with this after watching a science fiction film, which shall remain nameless due to spoilers, where the protagonist is briefly in a situation similar to the scenario at the end. I'm not sure how original it is, but I certainly don't recall seeing anything like it before.
Imagine, for simplicity, a purely selfish agent. Call her Alice. Alice is an expected utility maximizer, and she gains utility from eating cakes. Omega appears and offers her a deal - they will flip a fair coin, and give Alice three cakes if it comes up heads. If it comes up tails, they will take one cake from her stockpile. Alice runs the numbers, determines that the expected utility is positive, and accepts the deal. Just another day in the life of a perfectly truthful superintelligence offering inexplicable choices.
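(To spell out the numbers she runs - measuring utility directly in cakes, and assuming for this toy example that her utility is linear in cakes:

$$E[U] = \tfrac{1}{2}(+3) + \tfrac{1}{2}(-1) = +1 \text{ cake},$$

which is positive, so she takes the bet.)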
The next day, Omega returns. This time, they offer a slightly different deal - instead of flipping a coin, they will perfectly simulate Alice once. This copy will live out her life just as she would have done in reality - except that she will be given three cakes. The original Alice, however, receives nothing. She reasons that this is equivalent to the last deal, and accepts.
(If you disagree, consider the time between Omega starting the simulation and providing the cake. What subjective odds should she give for receiving cake?)
Imagine a second agent, Bob, who gets utility from Alice getting utility. One day, Omega shows up and offers to flip a fair coin. If it comes up heads, they will give Alice - who knows nothing of this - three cakes. If it comes up tails, they will take one cake from her stockpile. He reasons as Alice did and accepts.
Guess what? The next day, Omega returns, offering to simulate Alice and give her you-know-what (hint: it's cakes). Bob reasons just as Alice did about her second deal and accepts the bargain.
Humans value each other's utility. Most notably, we value our lives, and we value each other not being tortured. If we simulate someone a billion times, and switch off one simulation, this is equivalent to risking their life at odds of 1:1,000,000,000. If we simulate someone and torture one of the simulations, this is equivalent to risking a one-in-a-billion chance of them being tortured. Such risks are often acceptable, if enough utility is gained by success. We often risk our own lives at worse odds.
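(To spell out the arithmetic behind that equivalence, under the assumption that each copy gets equal subjective weight: with a billion simulations and exactly one of them switched off, the chance of being the unlucky copy is

$$P(\text{switched off}) = \frac{1}{10^9},$$

or 1/(10^9 + 1) if you also count the original - either way, about one in a billion.)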
If we simulate an entire society a trillion times, or 3^^^^^^3 times, or some similarly vast number, and then simulate something horrific - an individual's private harem or torture chamber or hunting ground - then the people in this simulation *are not real*. Their needs and desires are worth, not nothing, but far less than the merest whims of those who are Really Real. They are, in effect, zombies - not quite p-zombies, since they are conscious, but e-zombies - reasoning, intelligent beings that can talk and scream and beg for mercy but *do not matter*.
My mind rebels at the notion that such a thing might exist, even in theory, and yet ... if it were a similarly tiny *chance*, for similar reward, I would shut up and multiply and take it. This could be simply scope insensitivity, or some instinctual dislike of tribe members declaring themselves superior.
Well, there it is! The weirdest of Weirdtopias, I should think. Have I missed some obvious flaw? Have I made some sort of technical error? This is a draft, so criticisms will likely be incorporated into the final product (if indeed someone doesn't disprove it entirely).
I think the idea is meant to be that "one of many simulations" = "low probability" = "unimportant".
If so, I think this is simply a mistake. MugaSofer: you say that being killed in one of N simulations is just like having a 1/N chance of death. I guess you really mean 1/(N+1). Anyway, now Omega comes to you and says: unless you give me $100k (replace this with some sum that you could raise if necessary, but would be a hell of an imposition), I will simulate one copy of you and then stop simulating it at around the point in its life that you're currently at. Would you pay up? Would you pay up in the same way if the threat were "I'll flip a coin and kill you if it comes up heads"?
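To make that count explicit, on the assumption that every copy gets equal weight: there are N simulations plus the one original, so N+1 copies in all, exactly one of which is killed, giving

$$P(\text{you are the killed copy}) = \frac{1}{N+1},$$

not 1/N.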
The right way to think about this sort of problem is still contentious, but I'm pretty sure that "make another copy of me and kill it" is not at all the same sort of outcome as "kill me with probability 1/2".
Now, suppose there are a trillion simulations of you. If you really believe what it says at the start of this article, then I think the following positions are open to you. (1) All these simulations matter about as much as any other person does. (2) All these simulations matter only about 10^-12 as much as any other person -- and so do I, here in the "real" world. Only if you abandon your belief that there's no relevant difference between simulated-you and real-you, do you have the option of saying that your simulations matter less than you do. In that case, maybe you can say that each of N simulations matters 1/N as much, though to me this feels like a bad choice.
No. He doubles my "reality", then halves it. This leaves me just as real as I was in the first place.
However, if he ran the simulation...